* [RFC PATCH 0/6] mempool: add bucket mempool driver
@ 2017-11-24 16:06 Andrew Rybchenko
  2017-11-24 16:06 ` [RFC PATCH 1/6] mempool: implement abstract mempool info API Andrew Rybchenko
                   ` (11 more replies)
  0 siblings, 12 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2017-11-24 16:06 UTC (permalink / raw)
  To: dev; +Cc: Olivier Matz

The patch series adds a bucket mempool driver which allows allocation of
(both physically and virtually) contiguous blocks of objects, and adds the
mempool API to do it. It is still capable of providing separate objects,
but it is definitely more heavy-weight than the ring/stack drivers.

The target use case is to dequeue objects in blocks and to enqueue separate
objects back (where they are collected into buckets to be dequeued again).
So, a memory pool with the bucket driver is created by an application and
provided to a networking PMD receive queue. The choice of the bucket driver
is made using rte_eth_dev_pool_ops_supported(). A PMD that relies upon
contiguous block allocation should report the bucket driver as the only
supported and preferred one.
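
For illustration, an application might select and populate such a pool
roughly as follows. This is only a sketch, not part of the series: the pool
name, the element count/size parameters and the error handling are
arbitrary, and "bucket" is simply the ops name registered by the new driver.

  /* Sketch: create an empty pool, prefer the "bucket" ops if the port
   * reports them as usable, then populate it with the default policy.
   * Requires <rte_mempool.h>, <rte_ethdev.h> and <rte_lcore.h>.
   */
  static struct rte_mempool *
  create_rx_pool(uint16_t port_id, unsigned int n_objs, unsigned int elt_size)
  {
          struct rte_mempool *mp;

          mp = rte_mempool_create_empty("rx_pool", n_objs, elt_size,
                                        0, 0, rte_socket_id(), 0);
          if (mp == NULL)
                  return NULL;

          /* A non-negative return means the port can work with these ops */
          if (rte_eth_dev_pool_ops_supported(port_id, "bucket") >= 0)
                  rte_mempool_set_ops_byname(mp, "bucket", NULL);

          if (rte_mempool_populate_default(mp) < 0) {
                  rte_mempool_free(mp);
                  return NULL;
          }
          return mp;
  }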

The number of objects in a contiguous block is a function of the bucket
memory size (a .config option) and the total element size.
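
For example, with the default 32 KB bucket memory size and a total element
size (header + object + trailer) of 2 KB, a contiguous block would hold
32768 / 2048 = 16 objects; larger elements reduce the block size accordingly.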

As I understand it, this breaks the ABI, so in accordance with policy it
requires 3 acks, a deprecation notice and a mempool shared library version
bump. If there is a way to avoid the ABI breakage, please let us know.

In any case we would like to start with an RFC discussion. Comments and
ideas are welcome.

The target DPDK release is 18.05.

Artem V. Andreev (6):
  mempool: implement abstract mempool info API
  mempool: implement clustered object allocation
  mempool/bucket: implement bucket mempool manager
  mempool: add a function to flush default cache
  mempool: support block dequeue operation
  mempool/bucket: implement block dequeue operation

 MAINTAINERS                                        |   9 +
 config/common_base                                 |   2 +
 drivers/mempool/Makefile                           |   1 +
 drivers/mempool/bucket/Makefile                    |  49 ++
 drivers/mempool/bucket/rte_mempool_bucket.c        | 559 +++++++++++++++++++++
 .../mempool/bucket/rte_mempool_bucket_version.map  |   4 +
 lib/librte_mempool/rte_mempool.c                   |  41 +-
 lib/librte_mempool/rte_mempool.h                   | 179 ++++++-
 lib/librte_mempool/rte_mempool_ops.c               |  16 +
 mk/rte.app.mk                                      |   1 +
 test/test/test_mempool.c                           |   2 +-
 11 files changed, 857 insertions(+), 6 deletions(-)
 create mode 100644 drivers/mempool/bucket/Makefile
 create mode 100644 drivers/mempool/bucket/rte_mempool_bucket.c
 create mode 100644 drivers/mempool/bucket/rte_mempool_bucket_version.map

-- 
2.7.4

^ permalink raw reply	[flat|nested] 197+ messages in thread

* [RFC PATCH 1/6] mempool: implement abstract mempool info API
  2017-11-24 16:06 [RFC PATCH 0/6] mempool: add bucket mempool driver Andrew Rybchenko
@ 2017-11-24 16:06 ` Andrew Rybchenko
  2017-12-14 13:36   ` Olivier MATZ
  2017-11-24 16:06 ` [RFC PATCH 2/6] mempool: implement clustered object allocation Andrew Rybchenko
                   ` (10 subsequent siblings)
  11 siblings, 1 reply; 197+ messages in thread
From: Andrew Rybchenko @ 2017-11-24 16:06 UTC (permalink / raw)
  To: dev; +Cc: Olivier Matz, Artem V. Andreev

From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>

Primarily, it is intended as a way for the mempool driver to provide
additional information on how it lays out objects inside the mempool.
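
For illustration, once the structure gets concrete fields (as happens later
in this series), a caller is expected to use the wrapper roughly as in the
sketch below, which mirrors how rte_mempool_populate_default() uses it in
the next patch:

  struct rte_mempool_info info;
  int ret;

  ret = rte_mempool_ops_get_info(mp, &info);
  if (ret == -ENOTSUP)
          memset(&info, 0, sizeof(info)); /* no extra info: a valid case */
  else if (ret < 0)
          return ret;                     /* a real failure */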

Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 lib/librte_mempool/rte_mempool.h     | 31 +++++++++++++++++++++++++++++++
 lib/librte_mempool/rte_mempool_ops.c | 15 +++++++++++++++
 2 files changed, 46 insertions(+)

diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 721227f..3c59d36 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -217,6 +217,11 @@ struct rte_mempool_memhdr {
 	void *opaque;            /**< Argument passed to the free callback */
 };
 
+/*
+ * Additional information about the mempool
+ */
+struct rte_mempool_info;
+
 /**
  * The RTE mempool structure.
  */
@@ -422,6 +427,12 @@ typedef int (*rte_mempool_get_capabilities_t)(const struct rte_mempool *mp,
 		unsigned int *flags);
 
 /**
+ * Get some additional information about a mempool.
+ */
+typedef int (*rte_mempool_get_info_t)(const struct rte_mempool *mp,
+		struct rte_mempool_info *info);
+
+/**
  * Notify new memory area to mempool.
  */
 typedef int (*rte_mempool_ops_register_memory_area_t)
@@ -443,6 +454,10 @@ struct rte_mempool_ops {
 	 * Notify new memory area to mempool
 	 */
 	rte_mempool_ops_register_memory_area_t register_memory_area;
+	/**
+	 * Get mempool info
+	 */
+	rte_mempool_get_info_t get_info;
 } __rte_cache_aligned;
 
 #define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */
@@ -592,6 +607,22 @@ rte_mempool_ops_register_memory_area(const struct rte_mempool *mp,
 				char *vaddr, rte_iova_t iova, size_t len);
 
 /**
+ * @internal wrapper for mempool_ops get_info callback.
+ *
+ * @param mp [in]
+ *   Pointer to the memory pool.
+ * @param info [out]
+ *   Pointer to the rte_mempool_info structure
+ * @return
+ *   - 0: Success; the mempool driver supports retrieving supplementary
+ *        mempool information.
+ *   - -ENOTSUP: the driver does not support the get_info ops (valid case).
+ */
+int
+rte_mempool_ops_get_info(const struct rte_mempool *mp,
+			 struct rte_mempool_info *info);
+
+/**
  * @internal wrapper for mempool_ops free callback.
  *
  * @param mp
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
index 92b9f90..23de4db 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -88,6 +88,7 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h)
 	ops->get_count = h->get_count;
 	ops->get_capabilities = h->get_capabilities;
 	ops->register_memory_area = h->register_memory_area;
+	ops->get_info = h->get_info;
 
 	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
 
@@ -152,6 +153,20 @@ rte_mempool_ops_register_memory_area(const struct rte_mempool *mp, char *vaddr,
 	return ops->register_memory_area(mp, vaddr, iova, len);
 }
 
+/* wrapper to get additional mempool info */
+int
+rte_mempool_ops_get_info(const struct rte_mempool *mp,
+			 struct rte_mempool_info *info)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_get_ops(mp->ops_index);
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->get_info, -ENOTSUP);
+	return ops->get_info(mp, info);
+}
+
+
 /* sets mempool ops previously registered by rte_mempool_register_ops. */
 int
 rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [RFC PATCH 2/6] mempool: implement clustered object allocation
  2017-11-24 16:06 [RFC PATCH 0/6] mempool: add bucket mempool driver Andrew Rybchenko
  2017-11-24 16:06 ` [RFC PATCH 1/6] mempool: implement abstract mempool info API Andrew Rybchenko
@ 2017-11-24 16:06 ` Andrew Rybchenko
  2017-12-14 13:37   ` Olivier MATZ
  2017-11-24 16:06 ` [RFC PATCH 3/6] mempool/bucket: implement bucket mempool manager Andrew Rybchenko
                   ` (9 subsequent siblings)
  11 siblings, 1 reply; 197+ messages in thread
From: Andrew Rybchenko @ 2017-11-24 16:06 UTC (permalink / raw)
  To: dev; +Cc: Olivier Matz, Artem V. Andreev

From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>

Clustered allocation is required to simplify packing objects into
buckets and looking up the bucket control structure from an object pointer.
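
The practical effect is that every bucket starts at a power-of-two aligned
address, so the driver (added later in this series) can find the bucket
control structure from any object pointer with a simple mask. A rough
sketch, using the driver-private bucket_data/bucket_header types:

  /* Sketch: map an object pointer back to the header of its bucket,
   * relying on buckets starting at power-of-two "bucket page" boundaries.
   */
  static inline struct bucket_header *
  obj_to_bucket_hdr(const struct bucket_data *data, const void *obj)
  {
          uintptr_t addr = (uintptr_t)obj;

          return (struct bucket_header *)(addr & data->bucket_page_mask);
  }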

Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 lib/librte_mempool/rte_mempool.c | 39 +++++++++++++++++++++++++++++++++++----
 lib/librte_mempool/rte_mempool.h | 23 +++++++++++++++++++++--
 test/test/test_mempool.c         |  2 +-
 3 files changed, 57 insertions(+), 7 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index d50dba4..43455a3 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -239,7 +239,8 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
  */
 size_t
 rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
-		      unsigned int flags)
+		      unsigned int flags,
+		      const struct rte_mempool_info *info)
 {
 	size_t obj_per_page, pg_num, pg_sz;
 	unsigned int mask;
@@ -252,6 +253,17 @@ rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
 	if (total_elt_sz == 0)
 		return 0;
 
+	if (flags & MEMPOOL_F_CAPA_ALLOCATE_IN_CLUSTERS) {
+		unsigned int align_shift =
+			rte_bsf32(
+				rte_align32pow2(total_elt_sz *
+						info->cluster_size));
+		if (pg_shift < align_shift) {
+			return ((elt_num / info->cluster_size) + 2)
+				<< align_shift;
+		}
+	}
+
 	if (pg_shift == 0)
 		return total_elt_sz * elt_num;
 
@@ -362,6 +374,7 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	void *opaque)
 {
 	unsigned total_elt_sz;
+	unsigned int page_align_size = 0;
 	unsigned i = 0;
 	size_t off;
 	struct rte_mempool_memhdr *memhdr;
@@ -407,7 +420,11 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	memhdr->free_cb = free_cb;
 	memhdr->opaque = opaque;
 
-	if (mp->flags & MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS)
+	if (mp->flags & MEMPOOL_F_CAPA_ALLOCATE_IN_CLUSTERS) {
+		page_align_size = rte_align32pow2(total_elt_sz *
+						  mp->info.cluster_size);
+		off = RTE_PTR_ALIGN_CEIL(vaddr, page_align_size) - vaddr;
+	} else if (mp->flags & MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS)
 		/* align object start address to a multiple of total_elt_sz */
 		off = total_elt_sz - ((uintptr_t)vaddr % total_elt_sz);
 	else if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
@@ -424,6 +441,10 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 			mempool_add_elem(mp, (char *)vaddr + off, iova + off);
 		off += mp->elt_size + mp->trailer_size;
 		i++;
+		if ((mp->flags & MEMPOOL_F_CAPA_ALLOCATE_IN_CLUSTERS) &&
+		    (i % mp->info.cluster_size) == 0)
+			off = RTE_PTR_ALIGN_CEIL((char *)vaddr + off,
+						 page_align_size) - vaddr;
 	}
 
 	/* not enough room to store one object */
@@ -579,6 +600,16 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 	if ((ret < 0) && (ret != -ENOTSUP))
 		return ret;
 
+	ret = rte_mempool_ops_get_info(mp, &mp->info);
+	if ((ret < 0) && (ret != -ENOTSUP))
+		return ret;
+	if (ret == -ENOTSUP)
+		mp->info.cluster_size = 0;
+
+	if ((mp->info.cluster_size == 0) &&
+	    (mp_flags & MEMPOOL_F_CAPA_ALLOCATE_IN_CLUSTERS))
+		return -EINVAL;
+
 	/* update mempool capabilities */
 	mp->flags |= mp_flags;
 
@@ -595,7 +626,7 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
 	for (mz_id = 0, n = mp->size; n > 0; mz_id++, n -= ret) {
 		size = rte_mempool_xmem_size(n, total_elt_sz, pg_shift,
-						mp->flags);
+					     mp->flags, &mp->info);
 
 		ret = snprintf(mz_name, sizeof(mz_name),
 			RTE_MEMPOOL_MZ_FORMAT "_%d", mp->name, mz_id);
@@ -653,7 +684,7 @@ get_anon_size(const struct rte_mempool *mp)
 	pg_shift = rte_bsf32(pg_sz);
 	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
 	size = rte_mempool_xmem_size(mp->size, total_elt_sz, pg_shift,
-					mp->flags);
+				       mp->flags, &mp->info);
 
 	return size;
 }
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 3c59d36..9bcb8b7 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -220,7 +220,10 @@ struct rte_mempool_memhdr {
 /*
  * Additional information about the mempool
  */
-struct rte_mempool_info;
+struct rte_mempool_info {
+	/** Number of objects in a cluster */
+	unsigned int cluster_size;
+};
 
 /**
  * The RTE mempool structure.
@@ -265,6 +268,7 @@ struct rte_mempool {
 	struct rte_mempool_objhdr_list elt_list; /**< List of objects in pool */
 	uint32_t nb_mem_chunks;          /**< Number of memory chunks */
 	struct rte_mempool_memhdr_list mem_list; /**< List of memory chunks */
+	struct rte_mempool_info info; /**< Additional mempool info */
 
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 	/** Per-lcore statistics. */
@@ -298,6 +302,17 @@ struct rte_mempool {
 #define MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS 0x0080
 
 /**
+ * This capability flag is advertised by a mempool handler. It is used when
+ * the mempool driver wants clusters of objects to start at a power-of-two
+ * boundary.
+ *
+ * Note:
+ * - This flag should not be passed by the application.
+ *   It is used by the mempool driver only.
+ */
+#define MEMPOOL_F_CAPA_ALLOCATE_IN_CLUSTERS 0x0100
+
+/**
  * @internal When debug is enabled, store some statistics.
  *
  * @param mp
@@ -1605,11 +1620,15 @@ uint32_t rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
  *   LOG2 of the physical pages size. If set to 0, ignore page boundaries.
  * @param flags
  *  The mempool flags.
+ * @param info
+ *  A pointer to the mempool's additional info (may be NULL unless
+ *  MEMPOOL_F_CAPA_ALLOCATE_IN_CLUSTERS is set in @arg flags)
  * @return
  *   Required memory size aligned at page boundary.
  */
 size_t rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz,
-	uint32_t pg_shift, unsigned int flags);
+			     uint32_t pg_shift, unsigned int flags,
+			     const struct rte_mempool_info *info);
 
 /**
  * Get the size of memory required to store mempool elements.
diff --git a/test/test/test_mempool.c b/test/test/test_mempool.c
index 37ead50..f4bb9a9 100644
--- a/test/test/test_mempool.c
+++ b/test/test/test_mempool.c
@@ -485,7 +485,7 @@ test_mempool_xmem_misc(void)
 	elt_num = MAX_KEEP;
 	total_size = rte_mempool_calc_obj_size(MEMPOOL_ELT_SIZE, 0, NULL);
 	sz = rte_mempool_xmem_size(elt_num, total_size, MEMPOOL_PG_SHIFT_MAX,
-					0);
+				   0, NULL);
 
 	usz = rte_mempool_xmem_usage(NULL, elt_num, total_size, 0, 1,
 		MEMPOOL_PG_SHIFT_MAX, 0);
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [RFC PATCH 3/6] mempool/bucket: implement bucket mempool manager
  2017-11-24 16:06 [RFC PATCH 0/6] mempool: add bucket mempool driver Andrew Rybchenko
  2017-11-24 16:06 ` [RFC PATCH 1/6] mempool: implement abstract mempool info API Andrew Rybchenko
  2017-11-24 16:06 ` [RFC PATCH 2/6] mempool: implement clustered object allocation Andrew Rybchenko
@ 2017-11-24 16:06 ` Andrew Rybchenko
  2017-12-14 13:38   ` Olivier MATZ
  2017-11-24 16:06 ` [RFC PATCH 4/6] mempool: add a function to flush default cache Andrew Rybchenko
                   ` (8 subsequent siblings)
  11 siblings, 1 reply; 197+ messages in thread
From: Andrew Rybchenko @ 2017-11-24 16:06 UTC (permalink / raw)
  To: dev; +Cc: Olivier Matz, Artem V. Andreev

From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>

The manager provides a way to allocate a physically and virtually
contiguous set of objects.

Note: due to the way objects are organized in the bucket manager,
get_avail_count may return fewer objects than were enqueued.
That breaks the expectation of the mempool and mempool_perf tests.

Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 MAINTAINERS                                        |   9 +
 config/common_base                                 |   2 +
 drivers/mempool/Makefile                           |   1 +
 drivers/mempool/bucket/Makefile                    |  49 ++
 drivers/mempool/bucket/rte_mempool_bucket.c        | 521 +++++++++++++++++++++
 .../mempool/bucket/rte_mempool_bucket_version.map  |   4 +
 mk/rte.app.mk                                      |   1 +
 7 files changed, 587 insertions(+)
 create mode 100644 drivers/mempool/bucket/Makefile
 create mode 100644 drivers/mempool/bucket/rte_mempool_bucket.c
 create mode 100644 drivers/mempool/bucket/rte_mempool_bucket_version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index f0baeb4..144fd1d 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -293,6 +293,15 @@ F: test/test/test_event_eth_rx_adapter.c
 F: doc/guides/prog_guide/event_ethernet_rx_adapter.rst
 
 
+Memory Pool Drivers
+-------------------
+
+Bucket memory pool
+M: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
+M: Andrew Rybchenko <arybchenko@solarflare.com>
+F: drivers/mempool/bucket/
+
+
 Bus Drivers
 -----------
 
diff --git a/config/common_base b/config/common_base
index e74febe..8793699 100644
--- a/config/common_base
+++ b/config/common_base
@@ -608,6 +608,8 @@ CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG=n
 #
 # Compile Mempool drivers
 #
+CONFIG_RTE_DRIVER_MEMPOOL_BUCKET=y
+CONFIG_RTE_DRIVER_MEMPOOL_BUCKET_SIZE_KB=32
 CONFIG_RTE_DRIVER_MEMPOOL_RING=y
 CONFIG_RTE_DRIVER_MEMPOOL_STACK=y
 
diff --git a/drivers/mempool/Makefile b/drivers/mempool/Makefile
index f656c56..9de0783 100644
--- a/drivers/mempool/Makefile
+++ b/drivers/mempool/Makefile
@@ -30,6 +30,7 @@
 
 include $(RTE_SDK)/mk/rte.vars.mk
 
+DIRS-$(CONFIG_RTE_DRIVER_MEMPOOL_BUCKET) += bucket
 DIRS-$(CONFIG_RTE_LIBRTE_DPAA_MEMPOOL) += dpaa
 DIRS-$(CONFIG_RTE_LIBRTE_DPAA2_MEMPOOL) += dpaa2
 DIRS-$(CONFIG_RTE_DRIVER_MEMPOOL_RING) += ring
diff --git a/drivers/mempool/bucket/Makefile b/drivers/mempool/bucket/Makefile
new file mode 100644
index 0000000..06ddd31
--- /dev/null
+++ b/drivers/mempool/bucket/Makefile
@@ -0,0 +1,49 @@
+#
+#   BSD LICENSE
+#
+# Copyright (c) 2017 Solarflare Communications Inc.
+# All rights reserved.
+#
+# This software was jointly developed between OKTET Labs (under contract
+# for Solarflare) and Solarflare Communications, Inc.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are met:
+#
+# 1. Redistributions of source code must retain the above copyright notice,
+#    this list of conditions and the following disclaimer.
+# 2. Redistributions in binary form must reproduce the above copyright notice,
+#    this list of conditions and the following disclaimer in the documentation
+#    and/or other materials provided with the distribution.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+# THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
+# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
+# OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+# WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
+# OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE,
+# EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_mempool_bucket.a
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+LDLIBS += -lrte_eal -lrte_mempool -lrte_ring
+
+EXPORT_MAP := rte_mempool_bucket_version.map
+
+LIBABIVER := 1
+
+SRCS-$(CONFIG_RTE_DRIVER_MEMPOOL_BUCKET) += rte_mempool_bucket.c
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/mempool/bucket/rte_mempool_bucket.c b/drivers/mempool/bucket/rte_mempool_bucket.c
new file mode 100644
index 0000000..4063d2c
--- /dev/null
+++ b/drivers/mempool/bucket/rte_mempool_bucket.c
@@ -0,0 +1,521 @@
+/*-
+ *   BSD LICENSE
+ *
+ * Copyright (c) 2017 Solarflare Communications Inc.
+ * All rights reserved.
+ *
+ * This software was jointly developed between OKTET Labs (under contract
+ * for Solarflare) and Solarflare Communications, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright notice,
+ *    this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright notice,
+ *    this list of conditions and the following disclaimer in the documentation
+ *    and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
+ * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
+ * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE,
+ * EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdbool.h>
+#include <stdio.h>
+#include <string.h>
+
+#include <rte_errno.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+#include <rte_malloc.h>
+
+/*
+ * The general idea of the bucket mempool driver is as follows.
+ * We keep track of physically contiguous groups (buckets) of objects
+ * of a certain size. Every such group has a counter that is
+ * incremented every time an object from that group is enqueued.
+ * Until the bucket is full, no objects from it are eligible for allocation.
+ * If a request is made to dequeue a multiple of the bucket size, it is
+ * satisfied by returning whole buckets instead of separate objects.
+ */
+
+#define BUCKET_MEM_SIZE		(RTE_DRIVER_MEMPOOL_BUCKET_SIZE_KB * 1024)
+
+struct bucket_header {
+	unsigned int lcore_id;
+	uint8_t fill_cnt;
+};
+
+struct bucket_stack {
+	unsigned int top;
+	unsigned int limit;
+	void *objects[];
+};
+
+struct bucket_data {
+	unsigned int header_size;
+	unsigned int chunk_size;
+	unsigned int bucket_size;
+	uintptr_t bucket_page_mask;
+	struct rte_ring *shared_bucket_ring;
+	struct bucket_stack *buckets[RTE_MAX_LCORE];
+	/*
+	 * Multi-producer single-consumer ring to hold objects that are
+	 * returned to the mempool at a different lcore than initially
+	 * dequeued
+	 */
+	struct rte_ring *adoption_buffer_rings[RTE_MAX_LCORE];
+	struct rte_ring *shared_orphan_ring;
+	struct rte_mempool *pool;
+
+};
+
+static struct bucket_stack *
+bucket_stack_create(const struct rte_mempool *mp, unsigned int n_elts)
+{
+	struct bucket_stack *stack;
+
+	stack = rte_zmalloc_socket("bucket_stack",
+				   sizeof(struct bucket_stack) +
+				   n_elts * sizeof(void *),
+				   RTE_CACHE_LINE_SIZE,
+				   mp->socket_id);
+	if (stack == NULL)
+		return NULL;
+	stack->limit = n_elts;
+	stack->top = 0;
+
+	return stack;
+}
+
+static void
+bucket_stack_push(struct bucket_stack *stack, void *obj)
+{
+	RTE_ASSERT(stack->top < stack->limit);
+	stack->objects[stack->top++] = obj;
+}
+
+static void *
+bucket_stack_pop_unsafe(struct bucket_stack *stack)
+{
+	RTE_ASSERT(stack->top > 0);
+	return stack->objects[--stack->top];
+}
+
+static void *
+bucket_stack_pop(struct bucket_stack *stack)
+{
+	if (stack->top == 0)
+		return NULL;
+	return bucket_stack_pop_unsafe(stack);
+}
+
+static int
+bucket_enqueue_single(struct bucket_data *data, void *obj)
+{
+	int rc = 0;
+	uintptr_t addr = (uintptr_t)obj;
+	struct bucket_header *hdr;
+	unsigned int lcore_id = rte_lcore_id();
+
+	addr &= data->bucket_page_mask;
+	hdr = (struct bucket_header *)addr;
+
+	if (likely(hdr->lcore_id == lcore_id)) {
+		if (hdr->fill_cnt < data->bucket_size - 1) {
+			hdr->fill_cnt++;
+		} else {
+			hdr->fill_cnt = 0;
+			/* Stack is big enough to put all buckets */
+			bucket_stack_push(data->buckets[lcore_id], hdr);
+		}
+	} else if (hdr->lcore_id != LCORE_ID_ANY) {
+		struct rte_ring *adopt_ring =
+			data->adoption_buffer_rings[hdr->lcore_id];
+
+		rc = rte_ring_enqueue(adopt_ring, obj);
+		/* Ring is big enough to put all objects */
+		RTE_ASSERT(rc == 0);
+	} else if (hdr->fill_cnt < data->bucket_size - 1) {
+		hdr->fill_cnt++;
+	} else {
+		hdr->fill_cnt = 0;
+		rc = rte_ring_enqueue(data->shared_bucket_ring, hdr);
+		/* Ring is big enough to put all buckets */
+		RTE_ASSERT(rc == 0);
+	}
+
+	return rc;
+}
+
+static int
+bucket_enqueue(struct rte_mempool *mp, void * const *obj_table,
+	       unsigned int n)
+{
+	struct bucket_data *data = mp->pool_data;
+	unsigned int i;
+	int rc = 0;
+
+	for (i = 0; i < n; i++) {
+		rc = bucket_enqueue_single(data, obj_table[i]);
+		RTE_ASSERT(rc == 0);
+	}
+	return rc;
+}
+
+static void **
+bucket_fill_obj_table(const struct bucket_data *data, void **pstart,
+		      void **obj_table, unsigned int n)
+{
+	unsigned int i;
+	uint8_t *objptr = *pstart;
+
+	for (objptr += data->header_size, i = 0; i < n; i++,
+		     objptr += data->chunk_size)
+		*obj_table++ = objptr;
+	*pstart = objptr;
+	return obj_table;
+}
+
+static int
+bucket_dequeue_orphans(struct bucket_data *data, void **obj_table,
+		       unsigned int n_orphans)
+{
+	unsigned int i;
+	int rc;
+	uint8_t *objptr;
+
+	rc = rte_ring_dequeue_bulk(data->shared_orphan_ring, obj_table,
+				   n_orphans, NULL);
+	if (unlikely(rc != (int)n_orphans)) {
+		struct bucket_header *hdr;
+
+		objptr = bucket_stack_pop(data->buckets[rte_lcore_id()]);
+		hdr = (struct bucket_header *)objptr;
+
+		if (objptr == NULL) {
+			rc = rte_ring_dequeue(data->shared_bucket_ring,
+					      (void **)&objptr);
+			if (rc != 0) {
+				rte_errno = ENOBUFS;
+				return -rte_errno;
+			}
+			hdr = (struct bucket_header *)objptr;
+			hdr->lcore_id = rte_lcore_id();
+		}
+		hdr->fill_cnt = 0;
+		bucket_fill_obj_table(data, (void **)&objptr, obj_table,
+				      n_orphans);
+		for (i = n_orphans; i < data->bucket_size; i++,
+			     objptr += data->chunk_size) {
+			rc = rte_ring_enqueue(data->shared_orphan_ring,
+					      objptr);
+			if (rc != 0) {
+				RTE_ASSERT(0);
+				rte_errno = -rc;
+				return rc;
+			}
+		}
+	}
+
+	return 0;
+}
+
+static int
+bucket_dequeue_buckets(struct bucket_data *data, void **obj_table,
+		       unsigned int n_buckets)
+{
+	struct bucket_stack *cur_stack = data->buckets[rte_lcore_id()];
+	unsigned int n_buckets_from_stack = RTE_MIN(n_buckets, cur_stack->top);
+	void **obj_table_base = obj_table;
+
+	n_buckets -= n_buckets_from_stack;
+	while (n_buckets_from_stack-- > 0) {
+		void *obj = bucket_stack_pop_unsafe(cur_stack);
+
+		obj_table = bucket_fill_obj_table(data, &obj, obj_table,
+						  data->bucket_size);
+	}
+	while (n_buckets-- > 0) {
+		struct bucket_header *hdr;
+
+		if (unlikely(rte_ring_dequeue(data->shared_bucket_ring,
+					      (void **)&hdr) != 0)) {
+			/* Return the already-dequeued buffers
+			 * back to the mempool
+			 */
+			bucket_enqueue(data->pool, obj_table_base,
+				       obj_table - obj_table_base);
+			rte_errno = ENOBUFS;
+			return -rte_errno;
+		}
+		hdr->lcore_id = rte_lcore_id();
+		obj_table = bucket_fill_obj_table(data, (void **)&hdr,
+						  obj_table, data->bucket_size);
+	}
+
+	return 0;
+}
+
+static int
+bucket_adopt_orphans(struct bucket_data *data)
+{
+	int rc = 0;
+	struct rte_ring *adopt_ring =
+		data->adoption_buffer_rings[rte_lcore_id()];
+
+	if (unlikely(!rte_ring_empty(adopt_ring))) {
+		void *orphan;
+
+		while (rte_ring_sc_dequeue(adopt_ring, &orphan) == 0) {
+			rc = bucket_enqueue_single(data, orphan);
+			RTE_ASSERT(rc == 0);
+		}
+	}
+	return rc;
+}
+
+static int
+bucket_dequeue(struct rte_mempool *mp, void **obj_table, unsigned int n)
+{
+	struct bucket_data *data = mp->pool_data;
+	unsigned int n_buckets = n / data->bucket_size;
+	unsigned int n_orphans = n - n_buckets * data->bucket_size;
+	int rc = 0;
+
+	bucket_adopt_orphans(data);
+
+	if (unlikely(n_orphans > 0)) {
+		rc = bucket_dequeue_orphans(data, obj_table +
+					    (n_buckets * data->bucket_size),
+					    n_orphans);
+		if (rc != 0)
+			return rc;
+	}
+
+	if (likely(n_buckets > 0)) {
+		rc = bucket_dequeue_buckets(data, obj_table, n_buckets);
+		if (unlikely(rc != 0) && n_orphans > 0) {
+			rte_ring_enqueue_bulk(data->shared_orphan_ring,
+					      obj_table + (n_buckets *
+							   data->bucket_size),
+					      n_orphans, NULL);
+		}
+	}
+
+	return rc;
+}
+
+static unsigned int
+bucket_get_count(const struct rte_mempool *mp)
+{
+	const struct bucket_data *data = mp->pool_data;
+	const struct bucket_stack *local_bucket_stack =
+		data->buckets[rte_lcore_id()];
+
+	return data->bucket_size * local_bucket_stack->top +
+		data->bucket_size * rte_ring_count(data->shared_bucket_ring) +
+		rte_ring_count(data->shared_orphan_ring);
+}
+
+static int
+bucket_alloc(struct rte_mempool *mp)
+{
+	int rg_flags = 0;
+	int rc = 0;
+	char rg_name[RTE_RING_NAMESIZE];
+	struct bucket_data *data;
+	unsigned int i;
+
+	data = rte_zmalloc_socket("bucket_pool", sizeof(*data),
+				  RTE_CACHE_LINE_SIZE, mp->socket_id);
+	if (data == NULL) {
+		rc = -ENOMEM;
+		goto no_mem_for_data;
+	}
+	data->pool = mp;
+	data->header_size = mp->header_size;
+	RTE_VERIFY(sizeof(struct bucket_header) +
+		   sizeof(struct rte_mempool_objhdr) <= mp->header_size);
+	data->chunk_size = mp->header_size + mp->elt_size + mp->trailer_size;
+	data->bucket_size = BUCKET_MEM_SIZE / data->chunk_size;
+	data->bucket_page_mask = ~(rte_align64pow2(BUCKET_MEM_SIZE) - 1);
+
+	if (mp->flags & MEMPOOL_F_SP_PUT)
+		rg_flags |= RING_F_SP_ENQ;
+	if (mp->flags & MEMPOOL_F_SC_GET)
+		rg_flags |= RING_F_SC_DEQ;
+
+	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		if (!rte_lcore_is_enabled(i))
+			continue;
+		data->buckets[i] =
+			bucket_stack_create(mp, mp->size / data->bucket_size);
+		if (data->buckets[i] == NULL) {
+			rc = -ENOMEM;
+			goto no_mem_for_stacks;
+		}
+		rc = snprintf(rg_name, sizeof(rg_name),
+			      RTE_MEMPOOL_MZ_FORMAT ".a%u", mp->name, i);
+		if (rc < 0 || rc >= (int)sizeof(rg_name)) {
+			rc = -ENAMETOOLONG;
+			goto no_mem_for_stacks;
+		}
+		data->adoption_buffer_rings[i] =
+			rte_ring_create(rg_name, rte_align32pow2(mp->size + 1),
+					mp->socket_id,
+					rg_flags | RING_F_SC_DEQ);
+		if (data->adoption_buffer_rings[i] == NULL) {
+			rc = -rte_errno;
+			goto no_mem_for_stacks;
+		}
+	}
+
+	rc = snprintf(rg_name, sizeof(rg_name),
+		      RTE_MEMPOOL_MZ_FORMAT ".0", mp->name);
+	if (rc < 0 || rc >= (int)sizeof(rg_name)) {
+		rc = -ENAMETOOLONG;
+		goto invalid_shared_orphan_ring;
+	}
+	data->shared_orphan_ring =
+		rte_ring_create(rg_name, rte_align32pow2(mp->size + 1),
+				mp->socket_id, rg_flags);
+	if (data->shared_orphan_ring == NULL) {
+		rc = -rte_errno;
+		goto cannot_create_shared_orphan_ring;
+	}
+
+	rc = snprintf(rg_name, sizeof(rg_name),
+		       RTE_MEMPOOL_MZ_FORMAT ".1", mp->name);
+	if (rc < 0 || rc >= (int)sizeof(rg_name)) {
+		rc = -ENAMETOOLONG;
+		goto invalid_shared_bucket_ring;
+	}
+	data->shared_bucket_ring =
+		rte_ring_create(rg_name,
+				rte_align32pow2((mp->size /
+						 data->bucket_size) + 1),
+				mp->socket_id, rg_flags);
+	if (data->shared_bucket_ring == NULL) {
+		rc = -rte_errno;
+		goto cannot_create_shared_bucket_ring;
+	}
+
+	mp->pool_data = data;
+
+	return 0;
+
+cannot_create_shared_bucket_ring:
+invalid_shared_bucket_ring:
+	rte_ring_free(data->shared_orphan_ring);
+cannot_create_shared_orphan_ring:
+invalid_shared_orphan_ring:
+no_mem_for_stacks:
+	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		rte_free(data->buckets[i]);
+		rte_ring_free(data->adoption_buffer_rings[i]);
+	}
+	rte_free(data);
+no_mem_for_data:
+	rte_errno = -rc;
+	return rc;
+}
+
+static void
+bucket_free(struct rte_mempool *mp)
+{
+	unsigned int i;
+	struct bucket_data *data = mp->pool_data;
+
+	if (data == NULL)
+		return;
+
+	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		rte_free(data->buckets[i]);
+		rte_ring_free(data->adoption_buffer_rings[i]);
+	}
+
+	rte_ring_free(data->shared_orphan_ring);
+	rte_ring_free(data->shared_bucket_ring);
+
+	rte_free(data);
+}
+
+static int
+bucket_get_capabilities(__rte_unused const struct rte_mempool *mp,
+			unsigned int *flags)
+{
+	*flags |= MEMPOOL_F_CAPA_PHYS_CONTIG |
+		MEMPOOL_F_CAPA_ALLOCATE_IN_CLUSTERS;
+	return 0;
+}
+
+static int
+bucket_get_info(__rte_unused const struct rte_mempool *mp,
+		struct rte_mempool_info *info)
+{
+	/* mp->pool_data may be still uninitialized at this point */
+	unsigned int chunk_size = mp->header_size + mp->elt_size +
+		mp->trailer_size;
+
+	info->cluster_size = BUCKET_MEM_SIZE / chunk_size;
+	return 0;
+}
+
+static int
+bucket_register_memory_area(__rte_unused const struct rte_mempool *mp,
+			    char *vaddr, __rte_unused phys_addr_t paddr,
+			    size_t len)
+{
+	/* mp->pool_data may be still uninitialized at this point */
+	unsigned int chunk_size = mp->header_size + mp->elt_size +
+		mp->trailer_size;
+	unsigned int bucket_mem_size =
+		(BUCKET_MEM_SIZE / chunk_size) * chunk_size;
+	unsigned int bucket_page_sz = rte_align32pow2(bucket_mem_size);
+	uintptr_t align;
+	char *iter;
+
+	align = RTE_PTR_ALIGN_CEIL(vaddr, bucket_page_sz) - vaddr;
+
+	for (iter = vaddr + align; iter < vaddr + len; iter += bucket_page_sz) {
+		/* librte_mempool uses the header part for its own bookkeeping,
+		 * but the librte_mempool's object header is adjacent to the
+		 * data; it is small enough and the header is guaranteed to be
+		 * at least CACHE_LINE_SIZE (i.e. 64) bytes, so we do have
+		 * plenty of space at the start of the header. So the layout
+		 * looks like this:
+		 * [bucket_header] ... unused ... [rte_mempool_objhdr] [data...]
+		 */
+		struct bucket_header *hdr = (struct bucket_header *)iter;
+
+		hdr->fill_cnt = 0;
+		hdr->lcore_id = LCORE_ID_ANY;
+	}
+
+	return 0;
+}
+
+static const struct rte_mempool_ops ops_bucket = {
+	.name = "bucket",
+	.alloc = bucket_alloc,
+	.free = bucket_free,
+	.enqueue = bucket_enqueue,
+	.dequeue = bucket_dequeue,
+	.get_count = bucket_get_count,
+	.get_capabilities = bucket_get_capabilities,
+	.register_memory_area = bucket_register_memory_area,
+	.get_info = bucket_get_info,
+};
+
+
+MEMPOOL_REGISTER_OPS(ops_bucket);
diff --git a/drivers/mempool/bucket/rte_mempool_bucket_version.map b/drivers/mempool/bucket/rte_mempool_bucket_version.map
new file mode 100644
index 0000000..179140f
--- /dev/null
+++ b/drivers/mempool/bucket/rte_mempool_bucket_version.map
@@ -0,0 +1,4 @@
+DPDK_18.02 {
+
+	local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 6a6a745..d99181f 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -115,6 +115,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_VDEV_BUS)       += -lrte_bus_vdev
 ifeq ($(CONFIG_RTE_BUILD_SHARED_LIB),n)
 # plugins (link only if static libraries)
 
+_LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_BUCKET) += -lrte_mempool_bucket
 _LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_STACK)  += -lrte_mempool_stack
 
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET)  += -lrte_pmd_af_packet
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [RFC PATCH 4/6] mempool: add a function to flush default cache
  2017-11-24 16:06 [RFC PATCH 0/6] mempool: add bucket mempool driver Andrew Rybchenko
                   ` (2 preceding siblings ...)
  2017-11-24 16:06 ` [RFC PATCH 3/6] mempool/bucket: implement bucket mempool manager Andrew Rybchenko
@ 2017-11-24 16:06 ` Andrew Rybchenko
  2017-12-14 13:38   ` Olivier MATZ
  2017-11-24 16:06 ` [RFC PATCH 5/6] mempool: support block dequeue operation Andrew Rybchenko
                   ` (7 subsequent siblings)
  11 siblings, 1 reply; 197+ messages in thread
From: Andrew Rybchenko @ 2017-11-24 16:06 UTC (permalink / raw)
  To: dev; +Cc: Olivier Matz, Artem V. Andreev

From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>

The mempool get/put API takes care of the cache itself, but sometimes
it is necessary to flush the cache explicitly.

Also, a dedicated API allows this to be decoupled from the block get API
(to be added) and provides more fine-grained control.
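
For instance, a caller about to use the block dequeue API added later in
this series could first make sure that no objects are held back in its
per-lcore cache (a sketch; mp is an assumed mempool pointer):

  /* Return locally cached objects to the pool so that they can complete
   * partially filled buckets before whole blocks are requested.
   */
  rte_mempool_ensure_cache_flushed(mp);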

Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 lib/librte_mempool/rte_mempool.h | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 9bcb8b7..3a52b93 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -1161,6 +1161,22 @@ rte_mempool_default_cache(struct rte_mempool *mp, unsigned lcore_id)
 }
 
 /**
+ * Ensure that the default per-lcore mempool cache is flushed, if it is present.
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ */
+static __rte_always_inline void
+rte_mempool_ensure_cache_flushed(struct rte_mempool *mp)
+{
+	struct rte_mempool_cache *cache;
+	cache = rte_mempool_default_cache(mp, rte_lcore_id());
+	if (cache != NULL && cache->len > 0)
+		rte_mempool_cache_flush(cache, mp);
+}
+
+
+/**
  * @internal Put several objects back in the mempool; used internally.
  * @param mp
  *   A pointer to the mempool structure.
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [RFC PATCH 5/6] mempool: support block dequeue operation
  2017-11-24 16:06 [RFC PATCH 0/6] mempool: add bucket mempool driver Andrew Rybchenko
                   ` (3 preceding siblings ...)
  2017-11-24 16:06 ` [RFC PATCH 4/6] mempool: add a function to flush default cache Andrew Rybchenko
@ 2017-11-24 16:06 ` Andrew Rybchenko
  2017-12-14 13:38   ` Olivier MATZ
  2017-11-24 16:06 ` [RFC PATCH 6/6] mempool/bucket: implement " Andrew Rybchenko
                   ` (6 subsequent siblings)
  11 siblings, 1 reply; 197+ messages in thread
From: Andrew Rybchenko @ 2017-11-24 16:06 UTC (permalink / raw)
  To: dev; +Cc: Olivier Matz, Artem V. Andreev

From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>

If the mempool manager supports object blocks (physically and virtually
contiguous sets of objects), it is sufficient to get only the first
object of each block, and the function avoids filling in information
about every block member.
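
For illustration, a consumer of the new API might look roughly like the
sketch below; use_one_block() is a hypothetical helper and block_size is
assumed to equal the block size reported by the mempool driver:

  /* Sketch: dequeue one contiguous block and visit each of its objects. */
  static int
  use_one_block(struct rte_mempool *mp, unsigned int block_size)
  {
          size_t total_elt_sz = mp->header_size + mp->elt_size +
                  mp->trailer_size;
          void *first_obj;
          unsigned int i;

          if (rte_mempool_get_contig_blocks(mp, &first_obj, 1) < 0)
                  return -1;

          for (i = 0; i < block_size; i++) {
                  void *obj = (void *)((uintptr_t)first_obj +
                                       i * total_elt_sz);
                  /* ... hand obj over to the receive path, etc. ... */
                  (void)obj;
          }
          return 0;
  }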

Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 lib/librte_mempool/rte_mempool.c     |   4 +-
 lib/librte_mempool/rte_mempool.h     | 111 +++++++++++++++++++++++++++++++++++
 lib/librte_mempool/rte_mempool_ops.c |   1 +
 3 files changed, 115 insertions(+), 1 deletion(-)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 43455a3..6850d6e 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -603,8 +603,10 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 	ret = rte_mempool_ops_get_info(mp, &mp->info);
 	if ((ret < 0) && (ret != -ENOTSUP))
 		return ret;
-	if (ret == -ENOTSUP)
+	if (ret == -ENOTSUP) {
 		mp->info.cluster_size = 0;
+		mp->info.contig_block_size = 0;
+	}
 
 	if ((mp->info.cluster_size == 0) &&
 	    (mp_flags & MEMPOOL_F_CAPA_ALLOCATE_IN_CLUSTERS))
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 3a52b93..4575eb2 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -223,6 +223,8 @@ struct rte_mempool_memhdr {
 struct rte_mempool_info {
 	/** Number of objects in a cluster */
 	unsigned int cluster_size;
+	/** Number of objects in the contiguous block */
+	unsigned int contig_block_size;
 };
 
 /**
@@ -431,6 +433,12 @@ typedef int (*rte_mempool_dequeue_t)(struct rte_mempool *mp,
 		void **obj_table, unsigned int n);
 
 /**
+ * Dequeue a number of contiguous object blocks from the external pool.
+ */
+typedef int (*rte_mempool_dequeue_contig_blocks_t)(struct rte_mempool *mp,
+		 void **first_obj_table, unsigned int n);
+
+/**
  * Return the number of available objects in the external pool.
  */
 typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
@@ -473,6 +481,10 @@ struct rte_mempool_ops {
 	 * Get mempool info
 	 */
 	rte_mempool_get_info_t get_info;
+	/**
+	 * Dequeue a number of contiguous object blocks.
+	 */
+	rte_mempool_dequeue_contig_blocks_t dequeue_contig_blocks;
 } __rte_cache_aligned;
 
 #define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */
@@ -551,6 +563,30 @@ rte_mempool_ops_dequeue_bulk(struct rte_mempool *mp,
 }
 
 /**
+ * @internal Wrapper for mempool_ops dequeue_contig_blocks callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param first_obj_table
+ *   Pointer to a table of void * pointers (first objects).
+ * @param n
+ *   Number of blocks to get.
+ * @return
+ *   - 0: Success; got n objects.
+ *   - <0: Error; code of dequeue function.
+ */
+static inline int
+rte_mempool_ops_dequeue_contig_blocks(struct rte_mempool *mp,
+		void **first_obj_table, unsigned int n)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_get_ops(mp->ops_index);
+	RTE_ASSERT(ops->dequeue_contig_blocks != NULL);
+	return ops->dequeue_contig_blocks(mp, first_obj_table, n);
+}
+
+/**
  * @internal wrapper for mempool_ops enqueue callback.
  *
  * @param mp
@@ -1456,6 +1492,81 @@ rte_mempool_get(struct rte_mempool *mp, void **obj_p)
 }
 
 /**
+ * @internal Get contiguous blocks of objects from the pool. Used internally.
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @param first_obj_table
+ *   A pointer to a pointer to the first object in each block.
+ * @param n
+ *   A number of blocks to get.
+ * @return
+ *   - >0: Success
+ *   - <0: Error
+ */
+static __rte_always_inline int
+__mempool_generic_get_contig_blocks(struct rte_mempool *mp,
+				    void **first_obj_table, unsigned int n)
+{
+	int ret;
+
+	ret = rte_mempool_ops_dequeue_contig_blocks(mp, first_obj_table, n);
+	if (ret < 0)
+		__MEMPOOL_STAT_ADD(mp, get_fail,
+				   n * mp->info.contig_block_size);
+	else
+		__MEMPOOL_STAT_ADD(mp, get_success,
+				   n * mp->info.contig_block_size);
+
+	return ret;
+}
+
+/**
+ * Get a contiguous blocks of objects from the mempool.
+ *
+ * If a cache is enabled, consider flushing it first, to reuse objects
+ * as soon as possible.
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @param first_obj_table
+ *   A pointer to a pointer to the first object in each block.
+ * @param n
+ *   The number of blocks to get from mempool.
+ * @return
+ *   - >0: the size of the block
+ *   - -ENOBUFS: Not enough entries in the mempool; no object is retrieved.
+ *   - -EOPNOTSUPP: The mempool driver does not support block dequeue
+ */
+static __rte_always_inline int
+rte_mempool_get_contig_blocks(struct rte_mempool *mp,
+			      void **first_obj_table, unsigned int n)
+{
+	int ret;
+
+	ret = __mempool_generic_get_contig_blocks(mp, first_obj_table, n);
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+	if (ret == 0) {
+		const size_t total_elt_sz =
+			mp->header_size + mp->elt_size + mp->trailer_size;
+		unsigned int i, j;
+
+		for (i = 0; i < n; ++i) {
+			void *first_obj = first_obj_table[i];
+
+			for (j = 0; j < mp->info.contig_block_size; ++j) {
+				void *obj;
+
+				obj = (void *)((uintptr_t)first_obj +
+					       j * total_elt_sz);
+				rte_mempool_check_cookies(mp, &obj, 1, 1);
+			}
+		}
+	}
+#endif
+	return ret;
+}
+
+/**
  * Return the number of entries in the mempool.
  *
  * When cache is enabled, this function has to browse the length of
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
index 23de4db..cc38761 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -89,6 +89,7 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h)
 	ops->get_capabilities = h->get_capabilities;
 	ops->register_memory_area = h->register_memory_area;
 	ops->get_info = h->get_info;
+	ops->dequeue_contig_blocks = h->dequeue_contig_blocks;
 
 	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [RFC PATCH 6/6] mempool/bucket: implement block dequeue operation
  2017-11-24 16:06 [RFC PATCH 0/6] mempool: add bucket mempool driver Andrew Rybchenko
                   ` (4 preceding siblings ...)
  2017-11-24 16:06 ` [RFC PATCH 5/6] mempool: support block dequeue operation Andrew Rybchenko
@ 2017-11-24 16:06 ` Andrew Rybchenko
  2017-12-14 13:36 ` [RFC PATCH 0/6] mempool: add bucket mempool driver Olivier MATZ
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2017-11-24 16:06 UTC (permalink / raw)
  To: dev; +Cc: Olivier Matz, Artem V. Andreev

From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>

Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/mempool/bucket/rte_mempool_bucket.c | 38 +++++++++++++++++++++++++++++
 1 file changed, 38 insertions(+)

diff --git a/drivers/mempool/bucket/rte_mempool_bucket.c b/drivers/mempool/bucket/rte_mempool_bucket.c
index 4063d2c..ee5a6cf 100644
--- a/drivers/mempool/bucket/rte_mempool_bucket.c
+++ b/drivers/mempool/bucket/rte_mempool_bucket.c
@@ -315,6 +315,42 @@ bucket_dequeue(struct rte_mempool *mp, void **obj_table, unsigned int n)
 	return rc;
 }
 
+static int
+bucket_dequeue_contig_blocks(struct rte_mempool *mp, void **first_obj_table,
+			     unsigned int n)
+{
+	struct bucket_data *data = mp->pool_data;
+	const uint32_t header_size = data->header_size;
+	struct bucket_stack *cur_stack = data->buckets[rte_lcore_id()];
+	unsigned int n_buckets_from_stack = RTE_MIN(n, cur_stack->top);
+	struct bucket_header *hdr;
+	void **first_objp = first_obj_table;
+
+	bucket_adopt_orphans(data);
+
+	n -= n_buckets_from_stack;
+	while (n_buckets_from_stack-- > 0) {
+		hdr = bucket_stack_pop_unsafe(cur_stack);
+		*first_objp++ = (uint8_t *)hdr + header_size;
+	}
+	while (n-- > 0) {
+		if (unlikely(rte_ring_dequeue(data->shared_bucket_ring,
+					      (void **)&hdr) != 0)) {
+			/* Return the already dequeued buckets */
+			while (first_objp-- != first_obj_table) {
+				bucket_stack_push(cur_stack,
+					(uint8_t *)*first_objp - header_size);
+			}
+			rte_errno = ENOBUFS;
+			return -rte_errno;
+		}
+		hdr->lcore_id = rte_lcore_id();
+		*first_objp++ = (uint8_t *)hdr + header_size;
+	}
+
+	return 0;
+}
+
 static unsigned int
 bucket_get_count(const struct rte_mempool *mp)
 {
@@ -468,6 +504,7 @@ bucket_get_info(__rte_unused const struct rte_mempool *mp,
 		mp->trailer_size;
 
 	info->cluster_size = BUCKET_MEM_SIZE / chunk_size;
+	info->contig_block_size = info->cluster_size;
 	return 0;
 }
 
@@ -515,6 +552,7 @@ static const struct rte_mempool_ops ops_bucket = {
 	.get_capabilities = bucket_get_capabilities,
 	.register_memory_area = bucket_register_memory_area,
 	.get_info = bucket_get_info,
+	.dequeue_contig_blocks = bucket_dequeue_contig_blocks,
 };
 
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* Re: [RFC PATCH 0/6] mempool: add bucket mempool driver
  2017-11-24 16:06 [RFC PATCH 0/6] mempool: add bucket mempool driver Andrew Rybchenko
                   ` (5 preceding siblings ...)
  2017-11-24 16:06 ` [RFC PATCH 6/6] mempool/bucket: implement " Andrew Rybchenko
@ 2017-12-14 13:36 ` Olivier MATZ
  2018-01-17 15:03   ` Andrew Rybchenko
  2018-01-23 13:15 ` [RFC v2 00/17] " Andrew Rybchenko
                   ` (4 subsequent siblings)
  11 siblings, 1 reply; 197+ messages in thread
From: Olivier MATZ @ 2017-12-14 13:36 UTC (permalink / raw)
  To: Andrew Rybchenko; +Cc: dev

Hi Andrew,

Please find some comments about this patchset below.
I'll also send some comments as replies to the specific patch.

On Fri, Nov 24, 2017 at 04:06:25PM +0000, Andrew Rybchenko wrote:
> The patch series adds a bucket mempool driver which allows allocation of
> (both physically and virtually) contiguous blocks of objects, and adds the
> mempool API to do it. It is still capable of providing separate objects,
> but it is definitely more heavy-weight than the ring/stack drivers.
>
> The target use case is to dequeue objects in blocks and to enqueue separate
> objects back (where they are collected into buckets to be dequeued again).
> So, a memory pool with the bucket driver is created by an application and
> provided to a networking PMD receive queue. The choice of the bucket driver
> is made using rte_eth_dev_pool_ops_supported(). A PMD that relies upon
> contiguous block allocation should report the bucket driver as the only
> supported and preferred one.

So, you are planning to use this driver for a future/existing PMD?

Do you have numbers about the performance gain, in which conditions,
etc... ? And are there conditions where there is a performance loss ?

> The number of objects in a contiguous block is a function of the bucket
> memory size (a .config option) and the total element size.

The size of the bucket memory is hardcoded to 32KB.
Why this value ?
Won't that be an issue if the user wants to use larger objects?

> As I understand it, this breaks the ABI, so in accordance with policy it
> requires 3 acks, a deprecation notice and a mempool shared library version
> bump. If there is a way to avoid the ABI breakage, please let us know.

If my understanding is correct, the ABI breakage is caused by the
addition of the new block dequeue operation, right?


Thanks
Olivier

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [RFC PATCH 1/6] mempool: implement abstract mempool info API
  2017-11-24 16:06 ` [RFC PATCH 1/6] mempool: implement abstract mempool info API Andrew Rybchenko
@ 2017-12-14 13:36   ` Olivier MATZ
  2018-01-17 15:03     ` Andrew Rybchenko
  0 siblings, 1 reply; 197+ messages in thread
From: Olivier MATZ @ 2017-12-14 13:36 UTC (permalink / raw)
  To: Andrew Rybchenko; +Cc: dev, Artem V. Andreev

On Fri, Nov 24, 2017 at 04:06:26PM +0000, Andrew Rybchenko wrote:
> From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>
> 
> Primarily, it is intended as a way for the mempool driver to provide
> additional information on how it lays out objects inside the mempool.
> 
> Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
> ---
>  lib/librte_mempool/rte_mempool.h     | 31 +++++++++++++++++++++++++++++++
>  lib/librte_mempool/rte_mempool_ops.c | 15 +++++++++++++++
>  2 files changed, 46 insertions(+)
> 
> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> index 721227f..3c59d36 100644
> --- a/lib/librte_mempool/rte_mempool.h
> +++ b/lib/librte_mempool/rte_mempool.h
> @@ -217,6 +217,11 @@ struct rte_mempool_memhdr {
>  	void *opaque;            /**< Argument passed to the free callback */
>  };
>  
> +/*
> + * Additional information about the mempool
> + */
> +struct rte_mempool_info;
> +

While there is no compilation issue, I find it a bit strange to define this
API without defining the content of rte_mempool_info.

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [RFC PATCH 2/6] mempool: implement clustered object allocation
  2017-11-24 16:06 ` [RFC PATCH 2/6] mempool: implement clustered object allocation Andrew Rybchenko
@ 2017-12-14 13:37   ` Olivier MATZ
  2018-01-17 15:03     ` Andrew Rybchenko
  0 siblings, 1 reply; 197+ messages in thread
From: Olivier MATZ @ 2017-12-14 13:37 UTC (permalink / raw)
  To: Andrew Rybchenko; +Cc: dev, Artem V. Andreev, Santosh Shukla

On Fri, Nov 24, 2017 at 04:06:27PM +0000, Andrew Rybchenko wrote:
> From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>
> 
> Clustered allocation is required to simplify packing objects into
> buckets and looking up the bucket control structure from an object pointer.
> 
> Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
> ---
>  lib/librte_mempool/rte_mempool.c | 39 +++++++++++++++++++++++++++++++++++----
>  lib/librte_mempool/rte_mempool.h | 23 +++++++++++++++++++++--
>  test/test/test_mempool.c         |  2 +-
>  3 files changed, 57 insertions(+), 7 deletions(-)
> 
> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> index d50dba4..43455a3 100644
> --- a/lib/librte_mempool/rte_mempool.c
> +++ b/lib/librte_mempool/rte_mempool.c
> @@ -239,7 +239,8 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
>   */
>  size_t
>  rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
> -		      unsigned int flags)
> +		      unsigned int flags,
> +		      const struct rte_mempool_info *info)
>  {
>  	size_t obj_per_page, pg_num, pg_sz;
>  	unsigned int mask;
> @@ -252,6 +253,17 @@ rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
>  	if (total_elt_sz == 0)
>  		return 0;
>  
> +	if (flags & MEMPOOL_F_CAPA_ALLOCATE_IN_CLUSTERS) {
> +		unsigned int align_shift =
> +			rte_bsf32(
> +				rte_align32pow2(total_elt_sz *
> +						info->cluster_size));
> +		if (pg_shift < align_shift) {
> +			return ((elt_num / info->cluster_size) + 2)
> +				<< align_shift;
> +		}
> +	}
> +

+Cc Santosh for this

To be honest, that was my fear when introducing
MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS and MEMPOOL_F_CAPA_PHYS_CONTIG: to see more
and more specific flags in generic code.

I feel that the hidden meaning of these flags is more "if driver == foo",
which shows that something is wrong in the current design.

We have to think about another way to do it. Let me try to propose
something (to be deepened).

The standard way to create a mempool is:

  mp = create_empty(...)
  set_ops_by_name(mp, "my-driver")    // optional
  populate_default(mp)                // or populate_*()
  obj_iter(mp, callback, arg)         // optional, to init objects
  // and optional local func to init mempool priv

First, we can consider deprecating some APIs like:
 - rte_mempool_xmem_create()
 - rte_mempool_xmem_size()
 - rte_mempool_xmem_usage()
 - rte_mempool_populate_iova_tab()

These functions were introduced for xen, which was recently
removed. They are complex to use, and are not used anywhere else in
DPDK.

Then, instead of having flags (quite hard to understand without knowing
the underlying driver), we can let the mempool drivers do the
populate_default() operation. For that we can add a populate_default
field in mempool ops. Same for populate_virt(), populate_anon(), and
populate_phys() which can return -ENOTSUP if this is not
implemented/implementable on a specific driver, or if flags
(NO_CACHE_ALIGN, NO_SPREAD, ...) are not supported. If the function
pointer is NULL, use the generic function.

Thanks to this, the generic code would remain understandable and won't
have to care about how memory should be allocated for a specific driver.

Thoughts?


[...]

> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> index 3c59d36..9bcb8b7 100644
> --- a/lib/librte_mempool/rte_mempool.h
> +++ b/lib/librte_mempool/rte_mempool.h
> @@ -220,7 +220,10 @@ struct rte_mempool_memhdr {
>  /*
>   * Additional information about the mempool
>   */
> -struct rte_mempool_info;
> +struct rte_mempool_info {
> +	/** Number of objects in a cluster */
> +	unsigned int cluster_size;
> +};

I think what I'm proposing would also avoid introducing this
structure, which is generic but only applies to this driver.

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [RFC PATCH 3/6] mempool/bucket: implement bucket mempool manager
  2017-11-24 16:06 ` [RFC PATCH 3/6] mempool/bucket: implement bucket mempool manager Andrew Rybchenko
@ 2017-12-14 13:38   ` Olivier MATZ
  2018-01-17 15:06     ` Andrew Rybchenko
  0 siblings, 1 reply; 197+ messages in thread
From: Olivier MATZ @ 2017-12-14 13:38 UTC (permalink / raw)
  To: Andrew Rybchenko; +Cc: dev, Artem V. Andreev

On Fri, Nov 24, 2017 at 04:06:28PM +0000, Andrew Rybchenko wrote:
> From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>
> 
> The manager provides a way to allocate a physically and virtually
> contiguous set of objects.
> 
> Note: due to the way objects are organized in the bucket manager,
> get_avail_count may return fewer objects than were enqueued.
> That breaks the expectation of the mempool and mempool_perf tests.

To me, this can be problematic. The driver should respect the
API, or it will trigger hard-to-debug issues in applications. Can't
this be fixed in some way or another?

[...]

> --- a/config/common_base
> +++ b/config/common_base
> @@ -608,6 +608,8 @@ CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG=n
>  #
>  # Compile Mempool drivers
>  #
> +CONFIG_RTE_DRIVER_MEMPOOL_BUCKET=y
> +CONFIG_RTE_DRIVER_MEMPOOL_BUCKET_SIZE_KB=32
>  CONFIG_RTE_DRIVER_MEMPOOL_RING=y
>  CONFIG_RTE_DRIVER_MEMPOOL_STACK=y
>  

Why 32KB?
Why not more, or less?
Can it be a runtime parameter?
I guess it won't work with too large objects.

[...]

> +struct bucket_data {
> +	unsigned int header_size;
> +	unsigned int chunk_size;
> +	unsigned int bucket_size;
> +	uintptr_t bucket_page_mask;
> +	struct rte_ring *shared_bucket_ring;
> +	struct bucket_stack *buckets[RTE_MAX_LCORE];
> +	/*
> +	 * Multi-producer single-consumer ring to hold objects that are
> +	 * returned to the mempool at a different lcore than initially
> +	 * dequeued
> +	 */
> +	struct rte_ring *adoption_buffer_rings[RTE_MAX_LCORE];
> +	struct rte_ring *shared_orphan_ring;
> +	struct rte_mempool *pool;
> +
> +};

I'm seeing per-core structures. Will it work on non-dataplane cores?
For instance, if a control thread wants to allocate a mbuf?

If possible, these fields should be documented in more detail (or just renamed).
For instance, I suggest chunk_size could be called obj_per_bucket, which
better describes the content of the field.

[...]

> +static int
> +bucket_enqueue_single(struct bucket_data *data, void *obj)
> +{
> +	int rc = 0;
> +	uintptr_t addr = (uintptr_t)obj;
> +	struct bucket_header *hdr;
> +	unsigned int lcore_id = rte_lcore_id();
> +
> +	addr &= data->bucket_page_mask;
> +	hdr = (struct bucket_header *)addr;
> +
> +	if (likely(hdr->lcore_id == lcore_id)) {
> +		if (hdr->fill_cnt < data->bucket_size - 1) {
> +			hdr->fill_cnt++;
> +		} else {
> +			hdr->fill_cnt = 0;
> +			/* Stack is big enough to put all buckets */
> +			bucket_stack_push(data->buckets[lcore_id], hdr);
> +		}
> +	} else if (hdr->lcore_id != LCORE_ID_ANY) {
> +		struct rte_ring *adopt_ring =
> +			data->adoption_buffer_rings[hdr->lcore_id];
> +
> +		rc = rte_ring_enqueue(adopt_ring, obj);
> +		/* Ring is big enough to put all objects */
> +		RTE_ASSERT(rc == 0);
> +	} else if (hdr->fill_cnt < data->bucket_size - 1) {
> +		hdr->fill_cnt++;
> +	} else {
> +		hdr->fill_cnt = 0;
> +		rc = rte_ring_enqueue(data->shared_bucket_ring, hdr);
> +		/* Ring is big enough to put all buckets */
> +		RTE_ASSERT(rc == 0);
> +	}
> +
> +	return rc;
> +}

[...]

> +static int
> +bucket_dequeue_buckets(struct bucket_data *data, void **obj_table,
> +		       unsigned int n_buckets)
> +{
> +	struct bucket_stack *cur_stack = data->buckets[rte_lcore_id()];
> +	unsigned int n_buckets_from_stack = RTE_MIN(n_buckets, cur_stack->top);
> +	void **obj_table_base = obj_table;
> +
> +	n_buckets -= n_buckets_from_stack;
> +	while (n_buckets_from_stack-- > 0) {
> +		void *obj = bucket_stack_pop_unsafe(cur_stack);
> +
> +		obj_table = bucket_fill_obj_table(data, &obj, obj_table,
> +						  data->bucket_size);
> +	}
> +	while (n_buckets-- > 0) {
> +		struct bucket_header *hdr;
> +
> +		if (unlikely(rte_ring_dequeue(data->shared_bucket_ring,
> +					      (void **)&hdr) != 0)) {
> +			/* Return the already-dequeued buffers
> +			 * back to the mempool
> +			 */
> +			bucket_enqueue(data->pool, obj_table_base,
> +				       obj_table - obj_table_base);
> +			rte_errno = ENOBUFS;
> +			return -rte_errno;
> +		}
> +		hdr->lcore_id = rte_lcore_id();
> +		obj_table = bucket_fill_obj_table(data, (void **)&hdr,
> +						  obj_table, data->bucket_size);
> +	}
> +
> +	return 0;
> +}

[...]

> +static int
> +bucket_dequeue(struct rte_mempool *mp, void **obj_table, unsigned int n)
> +{
> +	struct bucket_data *data = mp->pool_data;
> +	unsigned int n_buckets = n / data->bucket_size;
> +	unsigned int n_orphans = n - n_buckets * data->bucket_size;
> +	int rc = 0;
> +
> +	bucket_adopt_orphans(data);
> +
> +	if (unlikely(n_orphans > 0)) {
> +		rc = bucket_dequeue_orphans(data, obj_table +
> +					    (n_buckets * data->bucket_size),
> +					    n_orphans);
> +		if (rc != 0)
> +			return rc;
> +	}
> +
> +	if (likely(n_buckets > 0)) {
> +		rc = bucket_dequeue_buckets(data, obj_table, n_buckets);
> +		if (unlikely(rc != 0) && n_orphans > 0) {
> +			rte_ring_enqueue_bulk(data->shared_orphan_ring,
> +					      obj_table + (n_buckets *
> +							   data->bucket_size),
> +					      n_orphans, NULL);
> +		}
> +	}
> +
> +	return rc;
> +}

If my understanding is correct, at initialization, all full buckets will
go to the data->shared_bucket_ring ring, with lcore_id == ANY (this is
done in register_mem).

(note: I feel 'data' is not an ideal name for bucket_data)

If core 0 allocates all the mbufs and then frees them all, they
will be stored in the per-core stack, with hdr->lcore_id == 0. Is that
right?

If yes, can core 1 allocate a mbuf after that?


> +static unsigned int
> +bucket_get_count(const struct rte_mempool *mp)
> +{
> +	const struct bucket_data *data = mp->pool_data;
> +	const struct bucket_stack *local_bucket_stack =
> +		data->buckets[rte_lcore_id()];
> +
> +	return data->bucket_size * local_bucket_stack->top +
> +		data->bucket_size * rte_ring_count(data->shared_bucket_ring) +
> +		rte_ring_count(data->shared_orphan_ring);
> +}

It looks like get_count relies only on the current core's stack usage
and ignores the other cores' stacks.

[...]

> +static int
> +bucket_register_memory_area(__rte_unused const struct rte_mempool *mp,
> +			    char *vaddr, __rte_unused phys_addr_t paddr,
> +			    size_t len)
> +{
> +	/* mp->pool_data may be still uninitialized at this point */
> +	unsigned int chunk_size = mp->header_size + mp->elt_size +
> +		mp->trailer_size;
> +	unsigned int bucket_mem_size =
> +		(BUCKET_MEM_SIZE / chunk_size) * chunk_size;
> +	unsigned int bucket_page_sz = rte_align32pow2(bucket_mem_size);
> +	uintptr_t align;
> +	char *iter;
> +
> +	align = RTE_PTR_ALIGN_CEIL(vaddr, bucket_page_sz) - vaddr;
> +
> +	for (iter = vaddr + align; iter < vaddr + len; iter += bucket_page_sz) {
> +		/* librte_mempool uses the header part for its own bookkeeping,
> +		 * but the librte_mempool's object header is adjacent to the
> +		 * data; it is small enough and the header is guaranteed to be
> +		 * at least CACHE_LINE_SIZE (i.e. 64) bytes, so we do have
> +		 * plenty of space at the start of the header. So the layout
> +		 * looks like this:
> +		 * [bucket_header] ... unused ... [rte_mempool_objhdr] [data...]
> +		 */

This is not always true.
If a user creates a mempool with the NO_CACHE_ALIGN flag, the header will be
small, without padding.

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [RFC PATCH 4/6] mempool: add a function to flush default cache
  2017-11-24 16:06 ` [RFC PATCH 4/6] mempool: add a function to flush default cache Andrew Rybchenko
@ 2017-12-14 13:38   ` Olivier MATZ
  2018-01-17 15:07     ` Andrew Rybchenko
  0 siblings, 1 reply; 197+ messages in thread
From: Olivier MATZ @ 2017-12-14 13:38 UTC (permalink / raw)
  To: Andrew Rybchenko; +Cc: dev, Artem V. Andreev

On Fri, Nov 24, 2017 at 04:06:29PM +0000, Andrew Rybchenko wrote:
> From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>
> 
> Mempool get/put API cares about cache itself, but sometimes it is
> required to flush the cache explicitly.

I don't disagree, but do you have some use-case in mind?


> Also dedicated API allows to decouple it from block get API (to be
> added) and provides more fine-grained control.
> 
> Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
> ---
>  lib/librte_mempool/rte_mempool.h | 16 ++++++++++++++++
>  1 file changed, 16 insertions(+)
> 
> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> index 9bcb8b7..3a52b93 100644
> --- a/lib/librte_mempool/rte_mempool.h
> +++ b/lib/librte_mempool/rte_mempool.h
> @@ -1161,6 +1161,22 @@ rte_mempool_default_cache(struct rte_mempool *mp, unsigned lcore_id)
>  }
>  
>  /**
> + * Ensure that a default per-lcore mempool cache is flushed, if it is present
> + *
> + * @param mp
> + *   A pointer to the mempool structure.
> + */
> +static __rte_always_inline void
> +rte_mempool_ensure_cache_flushed(struct rte_mempool *mp)
> +{
> +	struct rte_mempool_cache *cache;
> +	cache = rte_mempool_default_cache(mp, rte_lcore_id());
> +	if (cache != NULL && cache->len > 0)
> +		rte_mempool_cache_flush(cache, mp);
> +}
> +

We already have rte_mempool_cache_flush().
Why not just extend it instead of adding a new function?

I mean:

    static __rte_always_inline void
    rte_mempool_cache_flush(struct rte_mempool_cache *cache,
    			struct rte_mempool *mp)
    {
   +	if (cache == NULL)
   +		cache = rte_mempool_default_cache(mp, rte_lcore_id());
   +	if (cache == NULL || cache->len == 0)
   +		return;
    	rte_mempool_ops_enqueue_bulk(mp, cache->objs, cache->len);
    	cache->len = 0;
    }
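
Then a caller that does not want to track the cache pointer itself could
simply do:

    rte_mempool_cache_flush(NULL, mp);

and no new function would be needed.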

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [RFC PATCH 5/6] mempool: support block dequeue operation
  2017-11-24 16:06 ` [RFC PATCH 5/6] mempool: support block dequeue operation Andrew Rybchenko
@ 2017-12-14 13:38   ` Olivier MATZ
  0 siblings, 0 replies; 197+ messages in thread
From: Olivier MATZ @ 2017-12-14 13:38 UTC (permalink / raw)
  To: Andrew Rybchenko; +Cc: dev, Artem V. Andreev

On Fri, Nov 24, 2017 at 04:06:30PM +0000, Andrew Rybchenko wrote:
> From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>
> 
> If mempool manager supports object blocks (physically and virtual
> contiguous set of objects), it is sufficient to get the first
> object only and the function allows to avoid filling in of
> information about each block member.
> 
> Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>

This can be a good idea. A use case and some performance numbers would
be welcome to demonstrate it :)

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [RFC PATCH 0/6] mempool: add bucket mempool driver
  2017-12-14 13:36 ` [RFC PATCH 0/6] mempool: add bucket mempool driver Olivier MATZ
@ 2018-01-17 15:03   ` Andrew Rybchenko
  0 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-01-17 15:03 UTC (permalink / raw)
  To: Olivier MATZ; +Cc: dev

Hi Olivier,

First of all, many thanks for the review. See my replies/comments below.
I'll also reply to the specific patch mails.

On 12/14/2017 04:36 PM, Olivier MATZ wrote:
> Hi Andrew,
>
> Please find some comments about this patchset below.
> I'll also send some comments as replies to the specific patch.
>
> On Fri, Nov 24, 2017 at 04:06:25PM +0000, Andrew Rybchenko wrote:
>> The patch series adds bucket mempool driver which allows to allocate
>> (both physically and virtually) contiguous blocks of objects and adds
>> mempool API to do it. It is still capable to provide separate objects,
>> but it is definitely more heavy-weight than ring/stack drivers.
>>
>> The target usecase is dequeue in blocks and enqueue separate objects
>> back (which are collected in buckets to be dequeued). So, the memory
>> pool with bucket driver is created by an application and provided to
>> networking PMD receive queue. The choice of bucket driver is done using
>> rte_eth_dev_pool_ops_supported(). A PMD that relies upon contiguous
>> block allocation should report the bucket driver as the only supported
>> and preferred one.
> So, you are planning to use this driver for a future/existing PMD?

Yes, we're going to use it in the sfc PMD in the case of a dedicated FW
variant which utilizes the bucketing.

> Do you have numbers about the performance gain, in which conditions,
> etc... ? And are there conditions where there is a performance loss ?

Our idea here is to use it together with HW/FW which understands the bucketing.
It adds some load on the CPU to track buckets, but block/bucket dequeue helps
to compensate for it. We'll try to prepare performance figures when we have a
solution close to final. Hopefully pretty soon.

>> The number of objects in the contiguous block is a function of bucket
>> memory size (.config option) and total element size.
> The size of the bucket memory is hardcoded to 32KB.
> Why this value ?

It is just an example. In fact we test mainly with 64K and 128K.

> Won't that be an issue if the user wants to use larger objects?

Ideally it should be start-time configurable, but that requires a way
to specify driver-specific parameters passed to the mempool on allocation.
Right now we have decided to keep the task for the future since there is
no clear understanding of how it should look.
If you have ideas, please share; we would be thankful.

>> As I understand it breaks ABI so it requires 3 acks in accordance with
>> policy, deprecation notice and mempool shared library version bump.
>> If there is a way to avoid ABI breakage, please, let us know.
> If my understanding is correct, the ABI breakage is caused by the
> addition of the new block dequeue operation, right?

Yes and we'll have more ops to make population of objects customizable.

Thanks,
Andrew.

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [RFC PATCH 1/6] mempool: implement abstract mempool info API
  2017-12-14 13:36   ` Olivier MATZ
@ 2018-01-17 15:03     ` Andrew Rybchenko
  0 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-01-17 15:03 UTC (permalink / raw)
  To: Olivier MATZ; +Cc: dev, Artem V. Andreev

On 12/14/2017 04:36 PM, Olivier MATZ wrote:
> On Fri, Nov 24, 2017 at 04:06:26PM +0000, Andrew Rybchenko wrote:
>> From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>
>>
>> Primarily, it is intended as a way for the mempool driver to provide
>> additional information on how it lays up objects inside the mempool.
>>
>> Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
>> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
>> ---
>>   lib/librte_mempool/rte_mempool.h     | 31 +++++++++++++++++++++++++++++++
>>   lib/librte_mempool/rte_mempool_ops.c | 15 +++++++++++++++
>>   2 files changed, 46 insertions(+)
>>
>> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
>> index 721227f..3c59d36 100644
>> --- a/lib/librte_mempool/rte_mempool.h
>> +++ b/lib/librte_mempool/rte_mempool.h
>> @@ -217,6 +217,11 @@ struct rte_mempool_memhdr {
>>   	void *opaque;            /**< Argument passed to the free callback */
>>   };
>>   
>> +/*
>> + * Additional information about the mempool
>> + */
>> +struct rte_mempool_info;
>> +
> While there is no compilation issue, I find a bit strange to define this
> API without defining the content of rte_mempool_info.

Agree. Mainly it was an attempt to fit the required way of storing objects
in memory into the existing approach. I agree that it is significantly better
to solve it in a different way, as you suggested. So, the patch will go away.

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [RFC PATCH 2/6] mempool: implement clustered object allocation
  2017-12-14 13:37   ` Olivier MATZ
@ 2018-01-17 15:03     ` Andrew Rybchenko
  2018-01-17 15:55       ` santosh
  0 siblings, 1 reply; 197+ messages in thread
From: Andrew Rybchenko @ 2018-01-17 15:03 UTC (permalink / raw)
  To: Olivier MATZ; +Cc: dev, Artem V. Andreev, Santosh Shukla

On 12/14/2017 04:37 PM, Olivier MATZ wrote:
> On Fri, Nov 24, 2017 at 04:06:27PM +0000, Andrew Rybchenko wrote:
>> From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>
>>
>> Clustered allocation is required to simplify packaging objects into
>> buckets and search of the bucket control structure by an object.
>>
>> Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
>> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
>> ---
>>   lib/librte_mempool/rte_mempool.c | 39 +++++++++++++++++++++++++++++++++++----
>>   lib/librte_mempool/rte_mempool.h | 23 +++++++++++++++++++++--
>>   test/test/test_mempool.c         |  2 +-
>>   3 files changed, 57 insertions(+), 7 deletions(-)
>>
>> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
>> index d50dba4..43455a3 100644
>> --- a/lib/librte_mempool/rte_mempool.c
>> +++ b/lib/librte_mempool/rte_mempool.c
>> @@ -239,7 +239,8 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
>>    */
>>   size_t
>>   rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
>> -		      unsigned int flags)
>> +		      unsigned int flags,
>> +		      const struct rte_mempool_info *info)
>>   {
>>   	size_t obj_per_page, pg_num, pg_sz;
>>   	unsigned int mask;
>> @@ -252,6 +253,17 @@ rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
>>   	if (total_elt_sz == 0)
>>   		return 0;
>>   
>> +	if (flags & MEMPOOL_F_CAPA_ALLOCATE_IN_CLUSTERS) {
>> +		unsigned int align_shift =
>> +			rte_bsf32(
>> +				rte_align32pow2(total_elt_sz *
>> +						info->cluster_size));
>> +		if (pg_shift < align_shift) {
>> +			return ((elt_num / info->cluster_size) + 2)
>> +				<< align_shift;
>> +		}
>> +	}
>> +
> +Cc Santosh for this
>
> To be honnest, that was my fear when introducing
> MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS and MEMPOOL_F_CAPA_PHYS_CONTIG to see more
> and more specific flags in generic code.
>
> I feel that the hidden meaning of these flags is more "if driver == foo",
> which shows that something is wrong is the current design.
>
> We have to think about another way to do. Let me try to propose
> something (to be deepen).
>
> The standard way to create a mempool is:
>
>    mp = create_empty(...)
>    set_ops_by_name(mp, "my-driver")    // optional
>    populate_default(mp)                // or populate_*()
>    obj_iter(mp, callback, arg)         // optional, to init objects
>    // and optional local func to init mempool priv
>
> First, we can consider deprecating some APIs like:
>   - rte_mempool_xmem_create()
>   - rte_mempool_xmem_size()
>   - rte_mempool_xmem_usage()
>   - rte_mempool_populate_iova_tab()
>
> These functions were introduced for xen, which was recently
> removed. They are complex to use, and are not used anywhere else in
> DPDK.
>
> Then, instead of having flags (quite hard to understand without knowing
> the underlying driver), we can let the mempool drivers do the
> populate_default() operation. For that we can add a populate_default
> field in mempool ops. Same for populate_virt(), populate_anon(), and
> populate_phys() which can return -ENOTSUP if this is not
> implemented/implementable on a specific driver, or if flags
> (NO_CACHE_ALIGN, NO_SPREAD, ...) are not supported. If the function
> pointer is NULL, use the generic function.
>
> Thanks to this, the generic code would remain understandable and won't
> have to care about how memory should be allocated for a specific driver.
>
> Thoughts?

Yes, I agree. This week we'll provide an updated version of the RFC which
covers it, including the transition of mempool/octeontx. I think it is
sufficient to introduce two new ops:
 1. To calculate the memory space required to store a specified number of
    objects
 2. To populate objects in the provided memory chunk (the op will be called
    from rte_mempool_populate_iova(), which is a leaf function for all
    rte_mempool_populate_*() calls).
It will allow us to avoid duplication and keep memchunk housekeeping inside
the mempool library.
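
In other words, something along these lines (the signatures are only a
sketch at this point and may still change in the actual v2 patches):

    /* Return the memory size required for obj_num objects and report the
     * minimum usable chunk size and required alignment.
     */
    typedef ssize_t (*rte_mempool_calc_mem_size_t)(const struct rte_mempool *mp,
                    uint32_t obj_num, uint32_t pg_shift,
                    size_t *min_chunk_size, size_t *align);

    typedef void (rte_mempool_populate_obj_cb_t)(struct rte_mempool *mp,
                    void *opaque, void *vaddr, rte_iova_t iova);

    /* Lay out up to max_objs objects in the given memory chunk and report
     * each of them to the library via the provided callback.
     */
    typedef int (*rte_mempool_populate_t)(struct rte_mempool *mp,
                    unsigned int max_objs, void *vaddr, rte_iova_t iova,
                    size_t len, rte_mempool_populate_obj_cb_t obj_cb,
                    void *obj_cb_arg);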

> [...]
>
>> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
>> index 3c59d36..9bcb8b7 100644
>> --- a/lib/librte_mempool/rte_mempool.h
>> +++ b/lib/librte_mempool/rte_mempool.h
>> @@ -220,7 +220,10 @@ struct rte_mempool_memhdr {
>>   /*
>>    * Additional information about the mempool
>>    */
>> -struct rte_mempool_info;
>> +struct rte_mempool_info {
>> +	/** Number of objects in a cluster */
>> +	unsigned int cluster_size;
>> +};
> I think what I'm proposing would also prevent to introduce this
> structure, which is generic but only applies to this driver.

Yes

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [RFC PATCH 3/6] mempool/bucket: implement bucket mempool manager
  2017-12-14 13:38   ` Olivier MATZ
@ 2018-01-17 15:06     ` Andrew Rybchenko
  0 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-01-17 15:06 UTC (permalink / raw)
  To: Olivier MATZ; +Cc: dev, Artem V. Andreev

On 12/14/2017 04:38 PM, Olivier MATZ wrote:
> On Fri, Nov 24, 2017 at 04:06:28PM +0000, Andrew Rybchenko wrote:
>> From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>
>>
>> The manager provides a way to allocate physically and virtually
>> contiguous set of objects.
>>
>> Note: due to the way objects are organized in the bucket manager,
>> the get_avail_count may return less objects than were enqueued.
>> That breaks the expectation of mempool and mempool_perf tests.
> To me, this can be problematic. The driver should respect the
> API, or it will trigger hard-to-debug issues in applications. Can't
> this be fixed in some way or another?

As I understand it, there are no requirements on how fast get_count
works. If so, it is doable and we'll fix it in RFC v2.

> [...]
>
>> --- a/config/common_base
>> +++ b/config/common_base
>> @@ -608,6 +608,8 @@ CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG=n
>>   #
>>   # Compile Mempool drivers
>>   #
>> +CONFIG_RTE_DRIVER_MEMPOOL_BUCKET=y
>> +CONFIG_RTE_DRIVER_MEMPOOL_BUCKET_SIZE_KB=32
>>   CONFIG_RTE_DRIVER_MEMPOOL_RING=y
>>   CONFIG_RTE_DRIVER_MEMPOOL_STACK=y
>>   
> Why 32KB?
> Why not more, or less?
> Can it be a runtime parameter?
> I guess it won't work with too large objects.

We have no good understanding of how driver-specific parameters
should be passed on mempool creation. We've simply kept it for the
future since it looks like a separate task.
If you have ideas, please share - we'll be thankful.

> [...]
>
>> +struct bucket_data {
>> +	unsigned int header_size;
>> +	unsigned int chunk_size;
>> +	unsigned int bucket_size;
>> +	uintptr_t bucket_page_mask;
>> +	struct rte_ring *shared_bucket_ring;
>> +	struct bucket_stack *buckets[RTE_MAX_LCORE];
>> +	/*
>> +	 * Multi-producer single-consumer ring to hold objects that are
>> +	 * returned to the mempool at a different lcore than initially
>> +	 * dequeued
>> +	 */
>> +	struct rte_ring *adoption_buffer_rings[RTE_MAX_LCORE];
>> +	struct rte_ring *shared_orphan_ring;
>> +	struct rte_mempool *pool;
>> +
>> +};
> I'm seeing per-core structures. Will it work on non-dataplane cores?
> For instance, if a control thread wants to allocate a mbuf?

Maybe I don't understand something. Does the control thread have a
valid rte_lcore_id()?

> If possible, these fields should be more documented (or just renamed).
> For instance, I suggest chunk_size could be called obj_per_bucket, which
> better described the content of the field.

Thanks, we'll do.

> [...]
>
>> +static int
>> +bucket_enqueue_single(struct bucket_data *data, void *obj)
>> +{
>> +	int rc = 0;
>> +	uintptr_t addr = (uintptr_t)obj;
>> +	struct bucket_header *hdr;
>> +	unsigned int lcore_id = rte_lcore_id();
>> +
>> +	addr &= data->bucket_page_mask;
>> +	hdr = (struct bucket_header *)addr;
>> +
>> +	if (likely(hdr->lcore_id == lcore_id)) {
>> +		if (hdr->fill_cnt < data->bucket_size - 1) {
>> +			hdr->fill_cnt++;
>> +		} else {
>> +			hdr->fill_cnt = 0;
>> +			/* Stack is big enough to put all buckets */
>> +			bucket_stack_push(data->buckets[lcore_id], hdr);
>> +		}
>> +	} else if (hdr->lcore_id != LCORE_ID_ANY) {
>> +		struct rte_ring *adopt_ring =
>> +			data->adoption_buffer_rings[hdr->lcore_id];
>> +
>> +		rc = rte_ring_enqueue(adopt_ring, obj);
>> +		/* Ring is big enough to put all objects */
>> +		RTE_ASSERT(rc == 0);
>> +	} else if (hdr->fill_cnt < data->bucket_size - 1) {
>> +		hdr->fill_cnt++;
>> +	} else {
>> +		hdr->fill_cnt = 0;
>> +		rc = rte_ring_enqueue(data->shared_bucket_ring, hdr);
>> +		/* Ring is big enough to put all buckets */
>> +		RTE_ASSERT(rc == 0);
>> +	}
>> +
>> +	return rc;
>> +}
> [...]
>
>> +static int
>> +bucket_dequeue_buckets(struct bucket_data *data, void **obj_table,
>> +		       unsigned int n_buckets)
>> +{
>> +	struct bucket_stack *cur_stack = data->buckets[rte_lcore_id()];
>> +	unsigned int n_buckets_from_stack = RTE_MIN(n_buckets, cur_stack->top);
>> +	void **obj_table_base = obj_table;
>> +
>> +	n_buckets -= n_buckets_from_stack;
>> +	while (n_buckets_from_stack-- > 0) {
>> +		void *obj = bucket_stack_pop_unsafe(cur_stack);
>> +
>> +		obj_table = bucket_fill_obj_table(data, &obj, obj_table,
>> +						  data->bucket_size);
>> +	}
>> +	while (n_buckets-- > 0) {
>> +		struct bucket_header *hdr;
>> +
>> +		if (unlikely(rte_ring_dequeue(data->shared_bucket_ring,
>> +					      (void **)&hdr) != 0)) {
>> +			/* Return the already-dequeued buffers
>> +			 * back to the mempool
>> +			 */
>> +			bucket_enqueue(data->pool, obj_table_base,
>> +				       obj_table - obj_table_base);
>> +			rte_errno = ENOBUFS;
>> +			return -rte_errno;
>> +		}
>> +		hdr->lcore_id = rte_lcore_id();
>> +		obj_table = bucket_fill_obj_table(data, (void **)&hdr,
>> +						  obj_table, data->bucket_size);
>> +	}
>> +
>> +	return 0;
>> +}
> [...]
>
>> +static int
>> +bucket_dequeue(struct rte_mempool *mp, void **obj_table, unsigned int n)
>> +{
>> +	struct bucket_data *data = mp->pool_data;
>> +	unsigned int n_buckets = n / data->bucket_size;
>> +	unsigned int n_orphans = n - n_buckets * data->bucket_size;
>> +	int rc = 0;
>> +
>> +	bucket_adopt_orphans(data);
>> +
>> +	if (unlikely(n_orphans > 0)) {
>> +		rc = bucket_dequeue_orphans(data, obj_table +
>> +					    (n_buckets * data->bucket_size),
>> +					    n_orphans);
>> +		if (rc != 0)
>> +			return rc;
>> +	}
>> +
>> +	if (likely(n_buckets > 0)) {
>> +		rc = bucket_dequeue_buckets(data, obj_table, n_buckets);
>> +		if (unlikely(rc != 0) && n_orphans > 0) {
>> +			rte_ring_enqueue_bulk(data->shared_orphan_ring,
>> +					      obj_table + (n_buckets *
>> +							   data->bucket_size),
>> +					      n_orphans, NULL);
>> +		}
>> +	}
>> +
>> +	return rc;
>> +}
> If my understanding is correct, at initialization, all full buckets will
> go to the data->shared_bucket_ring ring, with lcore_id == ANY (this is
> done in register_mem).
>
> (note: I feel 'data' is not an ideal name for bucket_data)

Yes, agree. We'll rename it. It is really too generic.

> If the core 0 allocates all the mbufs, and then frees them all, they
> will be stored in the per-core stack, with hdr->lcoreid == 0. Is it
> right?

Right.

> If yes, can core 1 allocate a mbuf after that?

We'll add a threshold for the per-core stack. If it is exceeded, buckets
will be flushed into the shared ring.
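
A possible shape of that logic, just as a sketch (the threshold constant is
made up and the bucket ownership handling is omitted here):

    /* in bucket_enqueue_single(), right after a full bucket has been
     * pushed onto the local per-lcore stack */
    if (data->buckets[lcore_id]->top > LOCAL_BUCKET_STACK_THRESHOLD) {
            void *spare = bucket_stack_pop_unsafe(data->buckets[lcore_id]);

            /* the shared ring is sized to hold all buckets, cannot fail */
            rte_ring_enqueue(data->shared_bucket_ring, spare);
    }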

>> +static unsigned int
>> +bucket_get_count(const struct rte_mempool *mp)
>> +{
>> +	const struct bucket_data *data = mp->pool_data;
>> +	const struct bucket_stack *local_bucket_stack =
>> +		data->buckets[rte_lcore_id()];
>> +
>> +	return data->bucket_size * local_bucket_stack->top +
>> +		data->bucket_size * rte_ring_count(data->shared_bucket_ring) +
>> +		rte_ring_count(data->shared_orphan_ring);
>> +}
> It looks that get_count only rely on the current core stack usage
> and ignore the other core stacks.

We'll fix it to provide a more accurate return value, which is required
to pass the self-test and make it usable for debugging.

> [...]
>
>> +static int
>> +bucket_register_memory_area(__rte_unused const struct rte_mempool *mp,
>> +			    char *vaddr, __rte_unused phys_addr_t paddr,
>> +			    size_t len)
>> +{
>> +	/* mp->pool_data may be still uninitialized at this point */
>> +	unsigned int chunk_size = mp->header_size + mp->elt_size +
>> +		mp->trailer_size;
>> +	unsigned int bucket_mem_size =
>> +		(BUCKET_MEM_SIZE / chunk_size) * chunk_size;
>> +	unsigned int bucket_page_sz = rte_align32pow2(bucket_mem_size);
>> +	uintptr_t align;
>> +	char *iter;
>> +
>> +	align = RTE_PTR_ALIGN_CEIL(vaddr, bucket_page_sz) - vaddr;
>> +
>> +	for (iter = vaddr + align; iter < vaddr + len; iter += bucket_page_sz) {
>> +		/* librte_mempool uses the header part for its own bookkeeping,
>> +		 * but the librte_mempool's object header is adjacent to the
>> +		 * data; it is small enough and the header is guaranteed to be
>> +		 * at least CACHE_LINE_SIZE (i.e. 64) bytes, so we do have
>> +		 * plenty of space at the start of the header. So the layout
>> +		 * looks like this:
>> +		 * [bucket_header] ... unused ... [rte_mempool_objhdr] [data...]
>> +		 */
> This is not always true.
> If a use creates a mempool with the NO_CACHE_ALIGN, the header will be
> small, without padding.

Thanks. I think it can be handled when the bucket mempool implements its own
callback to populate objects.

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [RFC PATCH 4/6] mempool: add a function to flush default cache
  2017-12-14 13:38   ` Olivier MATZ
@ 2018-01-17 15:07     ` Andrew Rybchenko
  0 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-01-17 15:07 UTC (permalink / raw)
  To: Olivier MATZ; +Cc: dev, Artem V. Andreev

On 12/14/2017 04:38 PM, Olivier MATZ wrote:
> On Fri, Nov 24, 2017 at 04:06:29PM +0000, Andrew Rybchenko wrote:
>> From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>
>>
>> Mempool get/put API cares about cache itself, but sometimes it is
>> required to flush the cache explicitly.
> I don't disagree, but do you have some use-case in mind?

Ideally mempool objects should be reused ASAP. Block/bucket dequeue
bypasses the cache, since the cache is not block-aware. So, the cache should
be flushed before a block dequeue. Initially we had the cache flush inside
the block dequeue wrapper, but decoupling it gives more freedom for
optimizations.
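
So on the caller side the intended pattern is roughly the following (the
block dequeue call is only a placeholder for the API proposed in patch 5/6,
its final name and signature may differ):

    struct rte_mempool_cache *cache;

    cache = rte_mempool_default_cache(mp, rte_lcore_id());
    if (cache != NULL)
            rte_mempool_cache_flush(cache, mp);

    /* objects are now back in the driver, so whole buckets can be tracked */
    rc = rte_mempool_get_contig_blocks(mp, first_obj_table, n_blocks);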

>> Also dedicated API allows to decouple it from block get API (to be
>> added) and provides more fine-grained control.
>>
>> Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
>> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
>> ---
>>   lib/librte_mempool/rte_mempool.h | 16 ++++++++++++++++
>>   1 file changed, 16 insertions(+)
>>
>> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
>> index 9bcb8b7..3a52b93 100644
>> --- a/lib/librte_mempool/rte_mempool.h
>> +++ b/lib/librte_mempool/rte_mempool.h
>> @@ -1161,6 +1161,22 @@ rte_mempool_default_cache(struct rte_mempool *mp, unsigned lcore_id)
>>   }
>>   
>>   /**
>> + * Ensure that a default per-lcore mempool cache is flushed, if it is present
>> + *
>> + * @param mp
>> + *   A pointer to the mempool structure.
>> + */
>> +static __rte_always_inline void
>> +rte_mempool_ensure_cache_flushed(struct rte_mempool *mp)
>> +{
>> +	struct rte_mempool_cache *cache;
>> +	cache = rte_mempool_default_cache(mp, rte_lcore_id());
>> +	if (cache != NULL && cache->len > 0)
>> +		rte_mempool_cache_flush(cache, mp);
>> +}
>> +
> We already have rte_mempool_cache_flush().
> Why not just extending it instead of adding a new function?
>
> I mean:
>
>      static __rte_always_inline void
>      rte_mempool_cache_flush(struct rte_mempool_cache *cache,
>      			struct rte_mempool *mp)
>      {
>     +	if (cache == NULL)
>     +		cache = rte_mempool_default_cache(mp, rte_lcore_id());
>     +	if (cache == NULL || cache->len == 0)
>     +		return;
>      	rte_mempool_ops_enqueue_bulk(mp, cache->objs, cache->len);
>      	cache->len = 0;
>      }

Thanks, good idea.

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [RFC PATCH 2/6] mempool: implement clustered object allocation
  2018-01-17 15:03     ` Andrew Rybchenko
@ 2018-01-17 15:55       ` santosh
  2018-01-17 16:37         ` Andrew Rybchenko
  0 siblings, 1 reply; 197+ messages in thread
From: santosh @ 2018-01-17 15:55 UTC (permalink / raw)
  To: Andrew Rybchenko, Olivier MATZ; +Cc: dev, Artem V. Andreev


On Wednesday 17 January 2018 08:33 PM, Andrew Rybchenko wrote:
> On 12/14/2017 04:37 PM, Olivier MATZ wrote:
>> On Fri, Nov 24, 2017 at 04:06:27PM +0000, Andrew Rybchenko wrote:
>>> From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>
>>>
>>> Clustered allocation is required to simplify packaging objects into
>>> buckets and search of the bucket control structure by an object.
>>>
>>> Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
>>> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
>>> ---
>>>   lib/librte_mempool/rte_mempool.c | 39 +++++++++++++++++++++++++++++++++++----
>>>   lib/librte_mempool/rte_mempool.h | 23 +++++++++++++++++++++--
>>>   test/test/test_mempool.c         |  2 +-
>>>   3 files changed, 57 insertions(+), 7 deletions(-)
>>>
>>> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
>>> index d50dba4..43455a3 100644
>>> --- a/lib/librte_mempool/rte_mempool.c
>>> +++ b/lib/librte_mempool/rte_mempool.c
>>> @@ -239,7 +239,8 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
>>>    */
>>>   size_t
>>>   rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
>>> -              unsigned int flags)
>>> +              unsigned int flags,
>>> +              const struct rte_mempool_info *info)
>>>   {
>>>       size_t obj_per_page, pg_num, pg_sz;
>>>       unsigned int mask;
>>> @@ -252,6 +253,17 @@ rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
>>>       if (total_elt_sz == 0)
>>>           return 0;
>>>   +    if (flags & MEMPOOL_F_CAPA_ALLOCATE_IN_CLUSTERS) {
>>> +        unsigned int align_shift =
>>> +            rte_bsf32(
>>> +                rte_align32pow2(total_elt_sz *
>>> +                        info->cluster_size));
>>> +        if (pg_shift < align_shift) {
>>> +            return ((elt_num / info->cluster_size) + 2)
>>> +                << align_shift;
>>> +        }
>>> +    }
>>> +
>> +Cc Santosh for this
>>
>> To be honnest, that was my fear when introducing
>> MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS and MEMPOOL_F_CAPA_PHYS_CONTIG to see more
>> and more specific flags in generic code.
>>
>> I feel that the hidden meaning of these flags is more "if driver == foo",
>> which shows that something is wrong is the current design.
>>
>> We have to think about another way to do. Let me try to propose
>> something (to be deepen).
>>
>> The standard way to create a mempool is:
>>
>>    mp = create_empty(...)
>>    set_ops_by_name(mp, "my-driver")    // optional
>>    populate_default(mp)                // or populate_*()
>>    obj_iter(mp, callback, arg)         // optional, to init objects
>>    // and optional local func to init mempool priv
>>
>> First, we can consider deprecating some APIs like:
>>   - rte_mempool_xmem_create()
>>   - rte_mempool_xmem_size()
>>   - rte_mempool_xmem_usage()
>>   - rte_mempool_populate_iova_tab()
>>
>> These functions were introduced for xen, which was recently
>> removed. They are complex to use, and are not used anywhere else in
>> DPDK.
>>
>> Then, instead of having flags (quite hard to understand without knowing
>> the underlying driver), we can let the mempool drivers do the
>> populate_default() operation. For that we can add a populate_default
>> field in mempool ops. Same for populate_virt(), populate_anon(), and
>> populate_phys() which can return -ENOTSUP if this is not
>> implemented/implementable on a specific driver, or if flags
>> (NO_CACHE_ALIGN, NO_SPREAD, ...) are not supported. If the function
>> pointer is NULL, use the generic function.
>>
>> Thanks to this, the generic code would remain understandable and won't
>> have to care about how memory should be allocated for a specific driver.
>>
>> Thoughts?
>
> Yes, I agree. This week we'll provide updated version of the RFC which
> covers it including transition of the mempool/octeontx. I think it is sufficient
> to introduce two new ops:
>  1. To calculate memory space required to store specified number of objects
>  2. To populate objects in the provided memory chunk (the op will be called
>      from rte_mempool_populate_iova() which is a leaf function for all
>      rte_mempool_populate_*() calls.
> It will allow to avoid duplication and keep memchunks housekeeping inside
> mempool library.
>
There is also a downside to letting the mempool driver populate, which was raised in another thread.
http://dpdk.org/dev/patchwork/patch/31943/

Thanks.

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [RFC PATCH 2/6] mempool: implement clustered object allocation
  2018-01-17 15:55       ` santosh
@ 2018-01-17 16:37         ` Andrew Rybchenko
  0 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-01-17 16:37 UTC (permalink / raw)
  To: santosh, Olivier MATZ; +Cc: dev, Artem V. Andreev

On 01/17/2018 06:55 PM, santosh wrote:
> On Wednesday 17 January 2018 08:33 PM, Andrew Rybchenko wrote:
>> On 12/14/2017 04:37 PM, Olivier MATZ wrote:
>>> On Fri, Nov 24, 2017 at 04:06:27PM +0000, Andrew Rybchenko wrote:
>>>> From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>
>>>>
>>>> Clustered allocation is required to simplify packaging objects into
>>>> buckets and search of the bucket control structure by an object.
>>>>
>>>> Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
>>>> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
>>>> ---
>>>>    lib/librte_mempool/rte_mempool.c | 39 +++++++++++++++++++++++++++++++++++----
>>>>    lib/librte_mempool/rte_mempool.h | 23 +++++++++++++++++++++--
>>>>    test/test/test_mempool.c         |  2 +-
>>>>    3 files changed, 57 insertions(+), 7 deletions(-)
>>>>
>>>> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
>>>> index d50dba4..43455a3 100644
>>>> --- a/lib/librte_mempool/rte_mempool.c
>>>> +++ b/lib/librte_mempool/rte_mempool.c
>>>> @@ -239,7 +239,8 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
>>>>     */
>>>>    size_t
>>>>    rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
>>>> -              unsigned int flags)
>>>> +              unsigned int flags,
>>>> +              const struct rte_mempool_info *info)
>>>>    {
>>>>        size_t obj_per_page, pg_num, pg_sz;
>>>>        unsigned int mask;
>>>> @@ -252,6 +253,17 @@ rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
>>>>        if (total_elt_sz == 0)
>>>>            return 0;
>>>>    +    if (flags & MEMPOOL_F_CAPA_ALLOCATE_IN_CLUSTERS) {
>>>> +        unsigned int align_shift =
>>>> +            rte_bsf32(
>>>> +                rte_align32pow2(total_elt_sz *
>>>> +                        info->cluster_size));
>>>> +        if (pg_shift < align_shift) {
>>>> +            return ((elt_num / info->cluster_size) + 2)
>>>> +                << align_shift;
>>>> +        }
>>>> +    }
>>>> +
>>> +Cc Santosh for this
>>>
>>> To be honnest, that was my fear when introducing
>>> MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS and MEMPOOL_F_CAPA_PHYS_CONTIG to see more
>>> and more specific flags in generic code.
>>>
>>> I feel that the hidden meaning of these flags is more "if driver == foo",
>>> which shows that something is wrong is the current design.
>>>
>>> We have to think about another way to do. Let me try to propose
>>> something (to be deepen).
>>>
>>> The standard way to create a mempool is:
>>>
>>>     mp = create_empty(...)
>>>     set_ops_by_name(mp, "my-driver")    // optional
>>>     populate_default(mp)                // or populate_*()
>>>     obj_iter(mp, callback, arg)         // optional, to init objects
>>>     // and optional local func to init mempool priv
>>>
>>> First, we can consider deprecating some APIs like:
>>>    - rte_mempool_xmem_create()
>>>    - rte_mempool_xmem_size()
>>>    - rte_mempool_xmem_usage()
>>>    - rte_mempool_populate_iova_tab()
>>>
>>> These functions were introduced for xen, which was recently
>>> removed. They are complex to use, and are not used anywhere else in
>>> DPDK.
>>>
>>> Then, instead of having flags (quite hard to understand without knowing
>>> the underlying driver), we can let the mempool drivers do the
>>> populate_default() operation. For that we can add a populate_default
>>> field in mempool ops. Same for populate_virt(), populate_anon(), and
>>> populate_phys() which can return -ENOTSUP if this is not
>>> implemented/implementable on a specific driver, or if flags
>>> (NO_CACHE_ALIGN, NO_SPREAD, ...) are not supported. If the function
>>> pointer is NULL, use the generic function.
>>>
>>> Thanks to this, the generic code would remain understandable and won't
>>> have to care about how memory should be allocated for a specific driver.
>>>
>>> Thoughts?
>> Yes, I agree. This week we'll provide updated version of the RFC which
>> covers it including transition of the mempool/octeontx. I think it is sufficient
>> to introduce two new ops:
>>   1. To calculate memory space required to store specified number of objects
>>   2. To populate objects in the provided memory chunk (the op will be called
>>       from rte_mempool_populate_iova() which is a leaf function for all
>>       rte_mempool_populate_*() calls.
>> It will allow to avoid duplication and keep memchunks housekeeping inside
>> mempool library.
>>
> There is also a downside of letting mempool driver to populate, which was raised in other thread.
> http://dpdk.org/dev/patchwork/patch/31943/

I've seen the note about code duplication. Let's discuss it when v2 is sent.
I think our approach minimizes it and allows having only driver-specific code
in the driver callback.

^ permalink raw reply	[flat|nested] 197+ messages in thread

* [RFC v2 00/17] mempool: add bucket mempool driver
  2017-11-24 16:06 [RFC PATCH 0/6] mempool: add bucket mempool driver Andrew Rybchenko
                   ` (6 preceding siblings ...)
  2017-12-14 13:36 ` [RFC PATCH 0/6] mempool: add bucket mempool driver Olivier MATZ
@ 2018-01-23 13:15 ` Andrew Rybchenko
  2018-01-23 13:15   ` [RFC v2 01/17] mempool: fix phys contig check if populate default skipped Andrew Rybchenko
                     ` (21 more replies)
  2018-04-16 13:24 ` [PATCH v4 00/11] mempool: prepare to " Andrew Rybchenko
                   ` (3 subsequent siblings)
  11 siblings, 22 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-01-23 13:15 UTC (permalink / raw)
  To: dev
  Cc: Olivier Matz, Santosh Shukla, Jerin Jacob, Hemant Agrawal,
	Shreyansh Jain

The patch series starts from generic enhancements suggested by Olivier.
Basically it adds driver callbacks to calculate the required memory size and
to populate objects using the provided memory area. It allows us to remove
the so-called capability flags used before to tell the generic code how to
allocate and slice the allocated memory into mempool objects.
The clean-up which removes get_capabilities and register_memory_area is
not strictly required, but I think it is the right thing to do.
Existing mempool drivers are updated.

I've kept rte_mempool_populate_iova_tab() intact since it does not seem to
be directly related to the XMEM API functions.

The patch series adds bucket mempool driver which allows to allocate
(both physically and virtually) contiguous blocks of objects and adds
mempool API to do it. It is still capable to provide separate objects,
but it is definitely more heavy-weight than ring/stack drivers.
The driver will be used by future Solarflare driver enhancements
which allow utilizing physically contiguous blocks in the NIC
hardware/firmware.

The target usecase is dequeue in blocks and enqueue separate objects
back (which are collected in buckets to be dequeued). So, the memory
pool with bucket driver is created by an application and provided to
networking PMD receive queue. The choice of bucket driver is done using
rte_eth_dev_pool_ops_supported(). A PMD that relies upon contiguous
block allocation should report the bucket driver as the only supported
and preferred one.

Introduction of the contiguous block dequeue operation is proven by
performance measurements using autotest with minor enhancements:
 - in the original test bulks are powers of two, which is unacceptable
   for us, so they are changed to multiples of contig_block_size;
 - the test code is duplicated to support plain dequeue and
   dequeue_contig_blocks;
 - all the extra test variations (with/without cache etc) are eliminated;
 - a fake read from the dequeued buffer is added (in both cases) to
   simulate mbufs access.

start performance test for bucket (without cache)
mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=   1 n_keep=  30 Srate_persec=   111935488
mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=   1 n_keep=  60 Srate_persec=   115290931
mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=  15 n_keep=  30 Srate_persec=   353055539
mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=  15 n_keep=  60 Srate_persec=   353330790
mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=   1 n_keep=  30 Srate_persec=   224657407
mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=   1 n_keep=  60 Srate_persec=   230411468
mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=  15 n_keep=  30 Srate_persec=   706700902
mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=  15 n_keep=  60 Srate_persec=   703673139
mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=   1 n_keep=  30 Srate_persec=   425236887
mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=   1 n_keep=  60 Srate_persec=   437295512
mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=  15 n_keep=  30 Srate_persec=  1343409356
mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=  15 n_keep=  60 Srate_persec=  1336567397
start performance test for bucket (without cache + contiguous dequeue)
mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=   1 n_keep=  30 Crate_persec=   122945536
mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=   1 n_keep=  60 Crate_persec=   126458265
mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=  15 n_keep=  30 Crate_persec=   374262988
mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=  15 n_keep=  60 Crate_persec=   377316966
mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=   1 n_keep=  30 Crate_persec=   244842496
mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=   1 n_keep=  60 Crate_persec=   251618917
mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=  15 n_keep=  30 Crate_persec=   751226060
mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=  15 n_keep=  60 Crate_persec=   756233010
mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=   1 n_keep=  30 Crate_persec=   462068120
mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=   1 n_keep=  60 Crate_persec=   476997221
mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=  15 n_keep=  30 Crate_persec=  1432171313
mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=  15 n_keep=  60 Crate_persec=  1438829771

The number of objects in the contiguous block is a function of bucket
memory size (.config option) and total element size. In the future an
additional API with the possibility to pass parameters on mempool allocation
may be added.

It breaks the ABI since it changes rte_mempool_ops. It also removes
rte_mempool_ops_register_memory_area() and
rte_mempool_ops_get_capabilities() since the corresponding callbacks are
removed.

The target DPDK release is 18.05.

v2:
  - add driver ops to calculate required memory size and populate
    mempool objects, remove extra flags which were required before
    to control it
  - transition of octeontx and dpaa drivers to the new callbacks
  - change the info API to get the information from the driver which the
    API user requires to know the contiguous block size
  - remove get_capabilities (no longer required and may be
    substituted with more data in the info get API)
  - remove register_memory_area since it is substituted with the
    populate callback which can do more
  - use SPDX tags
  - avoid affinity of all objects to a single lcore
  - fix bucket get_count
  - deprecate XMEM API
  - avoid introduction of a new function to flush cache
  - fix NO_CACHE_ALIGN case in bucket mempool

Andrew Rybchenko (10):
  mempool: fix phys contig check if populate default skipped
  mempool: add op to calculate memory size to be allocated
  mempool/octeontx: add callback to calculate memory size
  mempool: add op to populate objects using provided memory
  mempool/octeontx: implement callback to populate objects
  mempool: remove callback to get capabilities
  mempool: deprecate xmem functions
  mempool/octeontx: prepare to remove register memory area op
  mempool/dpaa: convert to use populate driver op
  mempool: remove callback to register memory area

Artem V. Andreev (7):
  mempool: ensure the mempool is initialized before populating
  mempool/bucket: implement bucket mempool manager
  mempool: support flushing the default cache of the mempool
  mempool: implement abstract mempool info API
  mempool: support block dequeue operation
  mempool/bucket: implement block dequeue operation
  mempool/bucket: do not allow one lcore to grab all buckets

 MAINTAINERS                                        |   9 +
 config/common_base                                 |   2 +
 drivers/mempool/Makefile                           |   1 +
 drivers/mempool/bucket/Makefile                    |  27 +
 drivers/mempool/bucket/rte_mempool_bucket.c        | 626 +++++++++++++++++++++
 .../mempool/bucket/rte_mempool_bucket_version.map  |   4 +
 drivers/mempool/dpaa/dpaa_mempool.c                |  13 +-
 drivers/mempool/octeontx/rte_mempool_octeontx.c    |  63 ++-
 lib/librte_mempool/rte_mempool.c                   | 192 ++++---
 lib/librte_mempool/rte_mempool.h                   | 366 +++++++++---
 lib/librte_mempool/rte_mempool_ops.c               |  48 +-
 lib/librte_mempool/rte_mempool_version.map         |  11 +-
 mk/rte.app.mk                                      |   1 +
 13 files changed, 1184 insertions(+), 179 deletions(-)
 create mode 100644 drivers/mempool/bucket/Makefile
 create mode 100644 drivers/mempool/bucket/rte_mempool_bucket.c
 create mode 100644 drivers/mempool/bucket/rte_mempool_bucket_version.map

-- 
2.7.4

^ permalink raw reply	[flat|nested] 197+ messages in thread

* [RFC v2 01/17] mempool: fix phys contig check if populate default skipped
  2018-01-23 13:15 ` [RFC v2 00/17] " Andrew Rybchenko
@ 2018-01-23 13:15   ` Andrew Rybchenko
  2018-01-31 16:45     ` Olivier Matz
  2018-02-01 14:02     ` [PATCH] " Andrew Rybchenko
  2018-01-23 13:15   ` [RFC v2 02/17] mempool: add op to calculate memory size to be allocated Andrew Rybchenko
                     ` (20 subsequent siblings)
  21 siblings, 2 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-01-23 13:15 UTC (permalink / raw)
  To: dev; +Cc: Olivier Matz, stable

There is no specified dependency between rte_mempool_populate_default()
and rte_mempool_populate_iova(). So, the second should not rely on the
fact that the first adds capability flags to the mempool flags.

Fixes: 65cf769f5e6a ("mempool: detect physical contiguous objects")
Cc: stable@dpdk.org

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 lib/librte_mempool/rte_mempool.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 6d17022..e783b9a 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -362,6 +362,7 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	void *opaque)
 {
 	unsigned total_elt_sz;
+	unsigned int mp_cap_flags;
 	unsigned i = 0;
 	size_t off;
 	struct rte_mempool_memhdr *memhdr;
@@ -386,8 +387,14 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 
 	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
 
+	/* Get mempool capabilities */
+	mp_cap_flags = 0;
+	ret = rte_mempool_ops_get_capabilities(mp, &mp_cap_flags);
+	if ((ret < 0) && (ret != -ENOTSUP))
+		return ret;
+
 	/* Detect pool area has sufficient space for elements */
-	if (mp->flags & MEMPOOL_F_CAPA_PHYS_CONTIG) {
+	if (mp_cap_flags & MEMPOOL_F_CAPA_PHYS_CONTIG) {
 		if (len < total_elt_sz * mp->size) {
 			RTE_LOG(ERR, MEMPOOL,
 				"pool area %" PRIx64 " not enough\n",
@@ -407,7 +414,7 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	memhdr->free_cb = free_cb;
 	memhdr->opaque = opaque;
 
-	if (mp->flags & MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS)
+	if (mp_cap_flags & MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS)
 		/* align object start address to a multiple of total_elt_sz */
 		off = total_elt_sz - ((uintptr_t)vaddr % total_elt_sz);
 	else if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [RFC v2 02/17] mempool: add op to calculate memory size to be allocated
  2018-01-23 13:15 ` [RFC v2 00/17] " Andrew Rybchenko
  2018-01-23 13:15   ` [RFC v2 01/17] mempool: fix phys contig check if populate default skipped Andrew Rybchenko
@ 2018-01-23 13:15   ` Andrew Rybchenko
  2018-01-31 16:45     ` Olivier Matz
  2018-01-23 13:15   ` [RFC v2 03/17] mempool/octeontx: add callback to calculate memory size Andrew Rybchenko
                     ` (19 subsequent siblings)
  21 siblings, 1 reply; 197+ messages in thread
From: Andrew Rybchenko @ 2018-01-23 13:15 UTC (permalink / raw)
  To: dev; +Cc: Olivier Matz

The size of the memory chunk required to populate mempool objects depends
on how objects are stored in the memory. Different mempool drivers
may have different requirements, and a new operation allows them to
calculate the memory size in accordance with driver requirements and to
advertise requirements on minimum memory chunk size and alignment
in a generic way.

Suggested-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 lib/librte_mempool/rte_mempool.c           | 95 ++++++++++++++++++++++--------
 lib/librte_mempool/rte_mempool.h           | 63 +++++++++++++++++++-
 lib/librte_mempool/rte_mempool_ops.c       | 18 ++++++
 lib/librte_mempool/rte_mempool_version.map |  8 +++
 4 files changed, 159 insertions(+), 25 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index e783b9a..1f54f95 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -233,13 +233,14 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
 	return sz->total_size;
 }
 
-
 /*
- * Calculate maximum amount of memory required to store given number of objects.
+ * Internal function to calculate required memory chunk size shared
+ * by default implementation of the corresponding callback and
+ * deprecated external function.
  */
-size_t
-rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
-		      unsigned int flags)
+static size_t
+rte_mempool_xmem_size_int(uint32_t elt_num, size_t total_elt_sz,
+			  uint32_t pg_shift, unsigned int flags)
 {
 	size_t obj_per_page, pg_num, pg_sz;
 	unsigned int mask;
@@ -264,6 +265,49 @@ rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
 	return pg_num << pg_shift;
 }
 
+ssize_t
+rte_mempool_calc_mem_size_def(const struct rte_mempool *mp,
+			      uint32_t obj_num, uint32_t pg_shift,
+			      size_t *min_chunk_size,
+			      __rte_unused size_t *align)
+{
+	unsigned int mp_flags;
+	int ret;
+	size_t total_elt_sz;
+	size_t mem_size;
+
+	/* Get mempool capabilities */
+	mp_flags = 0;
+	ret = rte_mempool_ops_get_capabilities(mp, &mp_flags);
+	if ((ret < 0) && (ret != -ENOTSUP))
+		return ret;
+
+	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
+
+	mem_size = rte_mempool_xmem_size_int(obj_num, total_elt_sz, pg_shift,
+					     mp->flags | mp_flags);
+
+	if (mp_flags & MEMPOOL_F_CAPA_PHYS_CONTIG)
+		*min_chunk_size = mem_size;
+	else
+		*min_chunk_size = RTE_MAX((size_t)1 << pg_shift, total_elt_sz);
+
+	/* No extra align requirements by default */
+
+	return mem_size;
+}
+
+/*
+ * Calculate maximum amount of memory required to store given number of objects.
+ */
+size_t
+rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
+		      unsigned int flags)
+{
+	return rte_mempool_xmem_size_int(elt_num, total_elt_sz, pg_shift,
+					 flags);
+}
+
 /*
  * Calculate how much memory would be actually required with the
  * given memory footprint to store required number of elements.
@@ -570,25 +614,16 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 	unsigned int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
 	char mz_name[RTE_MEMZONE_NAMESIZE];
 	const struct rte_memzone *mz;
-	size_t size, total_elt_sz, align, pg_sz, pg_shift;
+	ssize_t mem_size;
+	size_t align, pg_sz, pg_shift;
 	rte_iova_t iova;
 	unsigned mz_id, n;
-	unsigned int mp_flags;
 	int ret;
 
 	/* mempool must not be populated */
 	if (mp->nb_mem_chunks != 0)
 		return -EEXIST;
 
-	/* Get mempool capabilities */
-	mp_flags = 0;
-	ret = rte_mempool_ops_get_capabilities(mp, &mp_flags);
-	if ((ret < 0) && (ret != -ENOTSUP))
-		return ret;
-
-	/* update mempool capabilities */
-	mp->flags |= mp_flags;
-
 	if (rte_eal_has_hugepages()) {
 		pg_shift = 0; /* not needed, zone is physically contiguous */
 		pg_sz = 0;
@@ -599,10 +634,15 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 		align = pg_sz;
 	}
 
-	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
 	for (mz_id = 0, n = mp->size; n > 0; mz_id++, n -= ret) {
-		size = rte_mempool_xmem_size(n, total_elt_sz, pg_shift,
-						mp->flags);
+		size_t min_chunk_size;
+
+		mem_size = rte_mempool_ops_calc_mem_size(mp, n, pg_shift,
+				&min_chunk_size, &align);
+		if (mem_size < 0) {
+			ret = mem_size;
+			goto fail;
+		}
 
 		ret = snprintf(mz_name, sizeof(mz_name),
 			RTE_MEMPOOL_MZ_FORMAT "_%d", mp->name, mz_id);
@@ -611,7 +651,7 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 			goto fail;
 		}
 
-		mz = rte_memzone_reserve_aligned(mz_name, size,
+		mz = rte_memzone_reserve_aligned(mz_name, mem_size,
 			mp->socket_id, mz_flags, align);
 		/* not enough memory, retry with the biggest zone we have */
 		if (mz == NULL)
@@ -622,6 +662,12 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 			goto fail;
 		}
 
+		if (mz->len < min_chunk_size) {
+			rte_memzone_free(mz);
+			ret = -ENOMEM;
+			goto fail;
+		}
+
 		if (mp->flags & MEMPOOL_F_NO_PHYS_CONTIG)
 			iova = RTE_BAD_IOVA;
 		else
@@ -654,13 +700,14 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 static size_t
 get_anon_size(const struct rte_mempool *mp)
 {
-	size_t size, total_elt_sz, pg_sz, pg_shift;
+	size_t size, pg_sz, pg_shift;
+	size_t min_chunk_size;
+	size_t align;
 
 	pg_sz = getpagesize();
 	pg_shift = rte_bsf32(pg_sz);
-	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
-	size = rte_mempool_xmem_size(mp->size, total_elt_sz, pg_shift,
-					mp->flags);
+	size = rte_mempool_ops_calc_mem_size(mp, mp->size, pg_shift,
+					     &min_chunk_size, &align);
 
 	return size;
 }
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index e21026a..be8a371 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -428,6 +428,39 @@ typedef int (*rte_mempool_get_capabilities_t)(const struct rte_mempool *mp,
 typedef int (*rte_mempool_ops_register_memory_area_t)
 (const struct rte_mempool *mp, char *vaddr, rte_iova_t iova, size_t len);
 
+/**
+ * Calculate memory size required to store specified number of objects.
+ *
+ * Note that if object size is bigger than page size, then it assumes
+ * that pages are grouped in subsets of physically continuous pages big
+ * enough to store at least one object.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param obj_num
+ *   Number of objects.
+ * @param pg_shift
+ *   LOG2 of the physical pages size. If set to 0, ignore page boundaries.
+ * @param min_chunk_size
+ *   Location for minimum size of the memory chunk which may be used to
+ *   store memory pool objects.
+ * @param align
+ *   Location with required memory chunk alignment.
+ * @return
+ *   Required memory size aligned at page boundary.
+ */
+typedef ssize_t (*rte_mempool_calc_mem_size_t)(const struct rte_mempool *mp,
+		uint32_t obj_num,  uint32_t pg_shift,
+		size_t *min_chunk_size, size_t *align);
+
+/**
+ * Default way to calculate memory size required to store specified
+ * number of objects.
+ */
+ssize_t rte_mempool_calc_mem_size_def(const struct rte_mempool *mp,
+				      uint32_t obj_num, uint32_t pg_shift,
+				      size_t *min_chunk_size, size_t *align);
+
 /** Structure defining mempool operations structure */
 struct rte_mempool_ops {
 	char name[RTE_MEMPOOL_OPS_NAMESIZE]; /**< Name of mempool ops struct. */
@@ -444,6 +477,11 @@ struct rte_mempool_ops {
 	 * Notify new memory area to mempool
 	 */
 	rte_mempool_ops_register_memory_area_t register_memory_area;
+	/**
+	 * Optional callback to calculate memory size required to
+	 * store specified number of objects.
+	 */
+	rte_mempool_calc_mem_size_t calc_mem_size;
 } __rte_cache_aligned;
 
 #define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */
@@ -593,6 +631,29 @@ rte_mempool_ops_register_memory_area(const struct rte_mempool *mp,
 				char *vaddr, rte_iova_t iova, size_t len);
 
 /**
+ * @internal wrapper for mempool_ops calc_mem_size callback.
+ * API to calculate size of memory required to store specified number of
+ * objects.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param obj_num
+ *   Number of objects.
+ * @param pg_shift
+ *   LOG2 of the physical pages size. If set to 0, ignore page boundaries.
+ * @param min_chunk_size
+ *   Location for minimum size of the memory chunk which may be used to
+ *   store memory pool objects.
+ * @param align
+ *   Location with required memory chunk alignment.
+ * @return
+ *   Required memory size aligned at page boundary.
+ */
+ssize_t rte_mempool_ops_calc_mem_size(const struct rte_mempool *mp,
+				      uint32_t obj_num, uint32_t pg_shift,
+				      size_t *min_chunk_size, size_t *align);
+
+/**
  * @internal wrapper for mempool_ops free callback.
  *
  * @param mp
@@ -1562,7 +1623,7 @@ uint32_t rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
  * of objects. Assume that the memory buffer will be aligned at page
  * boundary.
  *
- * Note that if object size is bigger then page size, then it assumes
+ * Note that if object size is bigger than page size, then it assumes
  * that pages are grouped in subsets of physically continuous pages big
  * enough to store at least one object.
  *
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
index 92b9f90..d048b37 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -88,6 +88,7 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h)
 	ops->get_count = h->get_count;
 	ops->get_capabilities = h->get_capabilities;
 	ops->register_memory_area = h->register_memory_area;
+	ops->calc_mem_size = h->calc_mem_size;
 
 	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
 
@@ -152,6 +153,23 @@ rte_mempool_ops_register_memory_area(const struct rte_mempool *mp, char *vaddr,
 	return ops->register_memory_area(mp, vaddr, iova, len);
 }
 
+/* wrapper to notify new memory area to external mempool */
+ssize_t
+rte_mempool_ops_calc_mem_size(const struct rte_mempool *mp,
+				uint32_t obj_num, uint32_t pg_shift,
+				size_t *min_chunk_size, size_t *align)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_get_ops(mp->ops_index);
+
+	if (ops->calc_mem_size == NULL)
+		return rte_mempool_calc_mem_size_def(mp, obj_num, pg_shift,
+						     min_chunk_size, align);
+
+	return ops->calc_mem_size(mp, obj_num, pg_shift, min_chunk_size, align);
+}
+
 /* sets mempool ops previously registered by rte_mempool_register_ops. */
 int
 rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index 62b76f9..9fa7270 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -51,3 +51,11 @@ DPDK_17.11 {
 	rte_mempool_populate_iova_tab;
 
 } DPDK_16.07;
+
+DPDK_18.05 {
+	global:
+
+	rte_mempool_calc_mem_size_def;
+
+} DPDK_17.11;
+
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [RFC v2 03/17] mempool/octeontx: add callback to calculate memory size
  2018-01-23 13:15 ` [RFC v2 00/17] " Andrew Rybchenko
  2018-01-23 13:15   ` [RFC v2 01/17] mempool: fix phys contig check if populate default skipped Andrew Rybchenko
  2018-01-23 13:15   ` [RFC v2 02/17] mempool: add op to calculate memory size to be allocated Andrew Rybchenko
@ 2018-01-23 13:15   ` Andrew Rybchenko
       [not found]     ` <BN3PR07MB2513732462EB5FE5E1B05713E3FA0@BN3PR07MB2513.namprd07.prod.outlook.com>
  2018-01-23 13:15   ` [RFC v2 04/17] mempool: add op to populate objects using provided memory Andrew Rybchenko
                     ` (18 subsequent siblings)
  21 siblings, 1 reply; 197+ messages in thread
From: Andrew Rybchenko @ 2018-01-23 13:15 UTC (permalink / raw)
  To: dev; +Cc: Olivier Matz, Santosh Shukla, Jerin Jacob

The driver requires one and only one physically contiguous
memory chunk for all objects.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/mempool/octeontx/rte_mempool_octeontx.c | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/drivers/mempool/octeontx/rte_mempool_octeontx.c b/drivers/mempool/octeontx/rte_mempool_octeontx.c
index d143d05..4ec5efe 100644
--- a/drivers/mempool/octeontx/rte_mempool_octeontx.c
+++ b/drivers/mempool/octeontx/rte_mempool_octeontx.c
@@ -136,6 +136,30 @@ octeontx_fpavf_get_capabilities(const struct rte_mempool *mp,
 	return 0;
 }
 
+static ssize_t
+octeontx_fpavf_calc_mem_size(const struct rte_mempool *mp,
+			     uint32_t obj_num, uint32_t pg_shift,
+			     size_t *min_chunk_size, size_t *align)
+{
+	ssize_t mem_size;
+
+	/*
+	 * Simply need space for one more object to be able to
+	 * fulfill alignment requirements.
+	 */
+	mem_size = rte_mempool_calc_mem_size_def(mp, obj_num + 1, pg_shift,
+						 min_chunk_size, align);
+	if (mem_size >= 0) {
+		/*
+		 * The whole memory area containing the objects must be
+		 * physically contiguous.
+		 */
+		*min_chunk_size = mem_size;
+	}
+
+	return mem_size;
+}
+
 static int
 octeontx_fpavf_register_memory_area(const struct rte_mempool *mp,
 				    char *vaddr, rte_iova_t paddr, size_t len)
@@ -159,6 +183,7 @@ static struct rte_mempool_ops octeontx_fpavf_ops = {
 	.get_count = octeontx_fpavf_get_count,
 	.get_capabilities = octeontx_fpavf_get_capabilities,
 	.register_memory_area = octeontx_fpavf_register_memory_area,
+	.calc_mem_size = octeontx_fpavf_calc_mem_size,
 };
 
 MEMPOOL_REGISTER_OPS(octeontx_fpavf_ops);
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [RFC v2 04/17] mempool: add op to populate objects using provided memory
  2018-01-23 13:15 ` [RFC v2 00/17] " Andrew Rybchenko
                     ` (2 preceding siblings ...)
  2018-01-23 13:15   ` [RFC v2 03/17] mempool/octeontx: add callback to calculate memory size Andrew Rybchenko
@ 2018-01-23 13:15   ` Andrew Rybchenko
  2018-01-31 16:45     ` Olivier Matz
  2018-01-23 13:16   ` [RFC v2 05/17] mempool/octeontx: implement callback to populate objects Andrew Rybchenko
                     ` (17 subsequent siblings)
  21 siblings, 1 reply; 197+ messages in thread
From: Andrew Rybchenko @ 2018-01-23 13:15 UTC (permalink / raw)
  To: dev; +Cc: Olivier Matz

The callback allows customizing how objects are stored in the
memory chunk. A default implementation of the callback, which simply
places objects one by one, is available.

Suggested-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
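Note for reviewers: below is a minimal sketch of a custom populate callback
layered on top of this op. The name my_populate and the 64-byte driver
private area are illustrative assumptions, not part of this series; a real
driver would derive the offset from its own hardware requirements.

static int
my_populate(struct rte_mempool *mp, unsigned int max_objs,
	    void *vaddr, rte_iova_t iova, size_t len,
	    rte_mempool_populate_obj_cb_t *obj_cb)
{
	/* hypothetical driver-private header kept at the start of the chunk */
	size_t off = 64;

	if (len < off)
		return -EINVAL;

	/* lay out the remaining space using the default one-by-one placement */
	return rte_mempool_populate_one_by_one(mp, max_objs,
			(char *)vaddr + off,
			(iova == RTE_BAD_IOVA) ? RTE_BAD_IOVA : iova + off,
			len - off, obj_cb);
}

Such a callback is wired into struct rte_mempool_ops as .populate; when the
field is left NULL, the default rte_mempool_populate_one_by_one() is used,
so existing drivers keep working unchanged.
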
 lib/librte_mempool/rte_mempool.c           | 44 +++++++++++-----
 lib/librte_mempool/rte_mempool.h           | 83 ++++++++++++++++++++++++++++++
 lib/librte_mempool/rte_mempool_ops.c       | 18 +++++++
 lib/librte_mempool/rte_mempool_version.map |  1 +
 4 files changed, 133 insertions(+), 13 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 1f54f95..c5003a9 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -145,9 +145,6 @@ mempool_add_elem(struct rte_mempool *mp, void *obj, rte_iova_t iova)
 	tlr = __mempool_get_trailer(obj);
 	tlr->cookie = RTE_MEMPOOL_TRAILER_COOKIE;
 #endif
-
-	/* enqueue in ring */
-	rte_mempool_ops_enqueue_bulk(mp, &obj, 1);
 }
 
 /* call obj_cb() for each mempool element */
@@ -396,6 +393,30 @@ rte_mempool_free_memchunks(struct rte_mempool *mp)
 	}
 }
 
+int
+rte_mempool_populate_one_by_one(struct rte_mempool *mp, unsigned int max_objs,
+		void *vaddr, rte_iova_t iova, size_t len,
+		rte_mempool_populate_obj_cb_t *obj_cb)
+{
+	size_t total_elt_sz;
+	size_t off;
+	unsigned int i;
+	void *obj;
+
+	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
+
+	for (off = 0, i = 0; off + total_elt_sz <= len && i < max_objs; i++) {
+		off += mp->header_size;
+		obj = (char *)vaddr + off;
+		obj_cb(mp, obj,
+		       (iova == RTE_BAD_IOVA) ? RTE_BAD_IOVA : (iova + off));
+		rte_mempool_ops_enqueue_bulk(mp, &obj, 1);
+		off += mp->elt_size + mp->trailer_size;
+	}
+
+	return i;
+}
+
 /* Add objects in the pool, using a physically contiguous memory
  * zone. Return the number of objects added, or a negative value
  * on error.
@@ -466,16 +487,13 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	else
 		off = RTE_PTR_ALIGN_CEIL(vaddr, RTE_CACHE_LINE_SIZE) - vaddr;
 
-	while (off + total_elt_sz <= len && mp->populated_size < mp->size) {
-		off += mp->header_size;
-		if (iova == RTE_BAD_IOVA)
-			mempool_add_elem(mp, (char *)vaddr + off,
-				RTE_BAD_IOVA);
-		else
-			mempool_add_elem(mp, (char *)vaddr + off, iova + off);
-		off += mp->elt_size + mp->trailer_size;
-		i++;
-	}
+	if (off > len)
+		return -EINVAL;
+
+	i = rte_mempool_ops_populate(mp, mp->size - mp->populated_size,
+		(char *)vaddr + off,
+		(iova == RTE_BAD_IOVA) ? RTE_BAD_IOVA : (iova + off),
+		len - off, mempool_add_elem);
 
 	/* not enough room to store one object */
 	if (i == 0)
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index be8a371..f6ffab9 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -461,6 +461,59 @@ ssize_t rte_mempool_calc_mem_size_def(const struct rte_mempool *mp,
 				      uint32_t obj_num, uint32_t pg_shift,
 				      size_t *min_chunk_size, size_t *align);
 
+/**
+ * Function to be called for each populated object.
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @param vaddr
+ *   Object virtual address.
+ * @param iova
+ *   Input/output virtual address of the object or #RTE_BAD_IOVA.
+ */
+typedef void (rte_mempool_populate_obj_cb_t)(struct rte_mempool *mp,
+		void *vaddr, rte_iova_t iova);
+
+/**
+ * Populate memory pool objects using provided memory chunk.
+ *
+ * Populated objects should be enqueued to the pool, e.g. using
+ * rte_mempool_ops_enqueue_bulk().
+ *
+ * If the given IO address is unknown (iova = RTE_BAD_IOVA),
+ * the chunk doesn't need to be physically contiguous (only virtually),
+ * and allocated objects may span two pages.
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @param max_objs
+ *   Maximum number of objects to be populated.
+ * @param vaddr
+ *   The virtual address of memory that should be used to store objects.
+ * @param iova
+ *   The IO address
+ * @param len
+ *   The length of memory in bytes.
+ * @param obj_cb
+ *   Callback function to be executed for each populated object.
+ * @return
+ *   The number of objects added on success.
+ *   On error, no objects are populated and a negative errno is returned.
+ */
+typedef int (*rte_mempool_populate_t)(struct rte_mempool *mp,
+		unsigned int max_objs,
+		void *vaddr, rte_iova_t iova, size_t len,
+		rte_mempool_populate_obj_cb_t *obj_cb);
+
+/**
+ * Default way to populate memory pool object using provided memory
+ * chunk: just slice objects one by one.
+ */
+int rte_mempool_populate_one_by_one(struct rte_mempool *mp,
+		unsigned int max_objs,
+		void *vaddr, rte_iova_t iova, size_t len,
+		rte_mempool_populate_obj_cb_t *obj_cb);
+
 /** Structure defining mempool operations structure */
 struct rte_mempool_ops {
 	char name[RTE_MEMPOOL_OPS_NAMESIZE]; /**< Name of mempool ops struct. */
@@ -482,6 +535,11 @@ struct rte_mempool_ops {
 	 * store specified number of objects.
 	 */
 	rte_mempool_calc_mem_size_t calc_mem_size;
+	/**
+	 * Optional callback to populate mempool objects using
+	 * provided memory chunk.
+	 */
+	rte_mempool_populate_t populate;
 } __rte_cache_aligned;
 
 #define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */
@@ -654,6 +712,31 @@ ssize_t rte_mempool_ops_calc_mem_size(const struct rte_mempool *mp,
 				      size_t *min_chunk_size, size_t *align);
 
 /**
+ * @internal wrapper for mempool_ops populate callback.
+ *
+ * Populate memory pool objects using provided memory chunk.
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @param max_objs
+ *   Maximum number of objects to be populated.
+ * @param vaddr
+ *   The virtual address of memory that should be used to store objects.
+ * @param iova
+ *   The IO address
+ * @param len
+ *   The length of memory in bytes.
+ * @param obj_cb
+ *   Callback function to be executed for each populated object.
+ * @return
+ *   The number of objects added on success.
+ *   On error, no objects are populated and a negative errno is returned.
+ */
+int rte_mempool_ops_populate(struct rte_mempool *mp, unsigned int max_objs,
+			     void *vaddr, rte_iova_t iova, size_t len,
+			     rte_mempool_populate_obj_cb_t *obj_cb);
+
+/**
  * @internal wrapper for mempool_ops free callback.
  *
  * @param mp
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
index d048b37..7c4a22b 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -89,6 +89,7 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h)
 	ops->get_capabilities = h->get_capabilities;
 	ops->register_memory_area = h->register_memory_area;
 	ops->calc_mem_size = h->calc_mem_size;
+	ops->populate = h->populate;
 
 	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
 
@@ -170,6 +171,23 @@ rte_mempool_ops_calc_mem_size(const struct rte_mempool *mp,
 	return ops->calc_mem_size(mp, obj_num, pg_shift, min_chunk_size, align);
 }
 
+/* wrapper to populate memory pool objects using provided memory chunk */
+int
+rte_mempool_ops_populate(struct rte_mempool *mp, unsigned int max_objs,
+				void *vaddr, rte_iova_t iova, size_t len,
+				rte_mempool_populate_obj_cb_t *obj_cb)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_get_ops(mp->ops_index);
+
+	if (ops->populate == NULL)
+		return rte_mempool_populate_one_by_one(mp, max_objs, vaddr,
+						       iova, len, obj_cb);
+
+	return ops->populate(mp, max_objs, vaddr, iova, len, obj_cb);
+}
+
 /* sets mempool ops previously registered by rte_mempool_register_ops. */
 int
 rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index 9fa7270..00288de 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -56,6 +56,7 @@ DPDK_18.05 {
 	global:
 
 	rte_mempool_calc_mem_size_def;
+	rte_mempool_populate_one_by_one;
 
 } DPDK_17.11;
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [RFC v2 05/17] mempool/octeontx: implement callback to populate objects
  2018-01-23 13:15 ` [RFC v2 00/17] " Andrew Rybchenko
                     ` (3 preceding siblings ...)
  2018-01-23 13:15   ` [RFC v2 04/17] mempool: add op to populate objects using provided memory Andrew Rybchenko
@ 2018-01-23 13:16   ` Andrew Rybchenko
  2018-01-23 13:16   ` [RFC v2 06/17] mempool: remove callback to get capabilities Andrew Rybchenko
                     ` (16 subsequent siblings)
  21 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-01-23 13:16 UTC (permalink / raw)
  To: dev; +Cc: Olivier Matz, Santosh Shukla, Jerin Jacob

A custom callback is required to fulfill the requirement to align
the object virtual address to the total object size.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
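Note for reviewers: a standalone illustration (with made-up numbers) of the
alignment performed in octeontx_fpavf_populate() below; the helper
align_to_total_elt_sz is hypothetical and not part of the patch.

#include <stddef.h>
#include <stdint.h>

/* Distance from vaddr to the next multiple of total_elt_sz;
 * equals total_elt_sz when vaddr is already a multiple. */
static size_t
align_to_total_elt_sz(uintptr_t vaddr, size_t total_elt_sz)
{
	return total_elt_sz - (vaddr % total_elt_sz);
}

For example, align_to_total_elt_sz(1050752, 2176) == 256, so the first object
starts at 1050752 + 256 == 1051008 == 483 * 2176, as the FPA requires. In the
worst case a whole object worth of space is skipped, which is why the
calc_mem_size callback added earlier in the series reserves room for
obj_num + 1 objects.
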
 drivers/mempool/octeontx/rte_mempool_octeontx.c | 28 +++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

diff --git a/drivers/mempool/octeontx/rte_mempool_octeontx.c b/drivers/mempool/octeontx/rte_mempool_octeontx.c
index 4ec5efe..6563e80 100644
--- a/drivers/mempool/octeontx/rte_mempool_octeontx.c
+++ b/drivers/mempool/octeontx/rte_mempool_octeontx.c
@@ -174,6 +174,33 @@ octeontx_fpavf_register_memory_area(const struct rte_mempool *mp,
 	return octeontx_fpavf_pool_set_range(pool_bar, len, vaddr, gpool);
 }
 
+static int
+octeontx_fpavf_populate(struct rte_mempool *mp, unsigned int max_objs,
+			void *vaddr, rte_iova_t iova, size_t len,
+			rte_mempool_populate_obj_cb_t *obj_cb)
+{
+	size_t total_elt_sz;
+	size_t off;
+
+	if (iova == RTE_BAD_IOVA)
+		return -EINVAL;
+
+	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
+
+	/* align object start address to a multiple of total_elt_sz */
+	off = total_elt_sz - ((uintptr_t)vaddr % total_elt_sz);
+
+	if (len < off)
+		return -EINVAL;
+
+	vaddr = (char *)vaddr + off;
+	iova += off;
+	len -= off;
+
+	return rte_mempool_populate_one_by_one(mp, max_objs, vaddr, iova, len,
+					       obj_cb);
+}
+
 static struct rte_mempool_ops octeontx_fpavf_ops = {
 	.name = "octeontx_fpavf",
 	.alloc = octeontx_fpavf_alloc,
@@ -184,6 +211,7 @@ static struct rte_mempool_ops octeontx_fpavf_ops = {
 	.get_capabilities = octeontx_fpavf_get_capabilities,
 	.register_memory_area = octeontx_fpavf_register_memory_area,
 	.calc_mem_size = octeontx_fpavf_calc_mem_size,
+	.populate = octeontx_fpavf_populate,
 };
 
 MEMPOOL_REGISTER_OPS(octeontx_fpavf_ops);
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [RFC v2 06/17] mempool: remove callback to get capabilities
  2018-01-23 13:15 ` [RFC v2 00/17] " Andrew Rybchenko
                     ` (4 preceding siblings ...)
  2018-01-23 13:16   ` [RFC v2 05/17] mempool/octeontx: implement callback to populate objects Andrew Rybchenko
@ 2018-01-23 13:16   ` Andrew Rybchenko
  2018-01-23 13:16   ` [RFC v2 07/17] mempool: deprecate xmem functions Andrew Rybchenko
                     ` (15 subsequent siblings)
  21 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-01-23 13:16 UTC (permalink / raw)
  To: dev; +Cc: Olivier Matz

The callback was introduced to let the generic code know the octeontx
mempool driver requirements: a single physically contiguous memory
chunk to store all objects, and object addresses aligned to the total
object size. Now these requirements are met using the new callbacks to
calculate the required memory chunk size and to populate objects using
the provided memory chunk.

These capability flags are not used anywhere else.

Restricting capabilities to flags is not generic and is likely to
be insufficient to describe mempool driver features. If required
in the future, an API which returns structured information may be
added.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/mempool/octeontx/rte_mempool_octeontx.c | 11 -----
 lib/librte_mempool/rte_mempool.c                | 56 +++----------------------
 lib/librte_mempool/rte_mempool.h                | 44 -------------------
 lib/librte_mempool/rte_mempool_ops.c            | 14 -------
 lib/librte_mempool/rte_mempool_version.map      |  1 -
 5 files changed, 5 insertions(+), 121 deletions(-)

diff --git a/drivers/mempool/octeontx/rte_mempool_octeontx.c b/drivers/mempool/octeontx/rte_mempool_octeontx.c
index 6563e80..36cc23b 100644
--- a/drivers/mempool/octeontx/rte_mempool_octeontx.c
+++ b/drivers/mempool/octeontx/rte_mempool_octeontx.c
@@ -126,16 +126,6 @@ octeontx_fpavf_get_count(const struct rte_mempool *mp)
 	return octeontx_fpa_bufpool_free_count(pool);
 }
 
-static int
-octeontx_fpavf_get_capabilities(const struct rte_mempool *mp,
-				unsigned int *flags)
-{
-	RTE_SET_USED(mp);
-	*flags |= (MEMPOOL_F_CAPA_PHYS_CONTIG |
-			MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS);
-	return 0;
-}
-
 static ssize_t
 octeontx_fpavf_calc_mem_size(const struct rte_mempool *mp,
 			     uint32_t obj_num, uint32_t pg_shift,
@@ -208,7 +198,6 @@ static struct rte_mempool_ops octeontx_fpavf_ops = {
 	.enqueue = octeontx_fpavf_enqueue,
 	.dequeue = octeontx_fpavf_dequeue,
 	.get_count = octeontx_fpavf_get_count,
-	.get_capabilities = octeontx_fpavf_get_capabilities,
 	.register_memory_area = octeontx_fpavf_register_memory_area,
 	.calc_mem_size = octeontx_fpavf_calc_mem_size,
 	.populate = octeontx_fpavf_populate,
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index c5003a9..32b3f94 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -237,15 +237,9 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
  */
 static size_t
 rte_mempool_xmem_size_int(uint32_t elt_num, size_t total_elt_sz,
-			  uint32_t pg_shift, unsigned int flags)
+			  uint32_t pg_shift, __rte_unused unsigned int flags)
 {
 	size_t obj_per_page, pg_num, pg_sz;
-	unsigned int mask;
-
-	mask = MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS | MEMPOOL_F_CAPA_PHYS_CONTIG;
-	if ((flags & mask) == mask)
-		/* alignment need one additional object */
-		elt_num += 1;
 
 	if (total_elt_sz == 0)
 		return 0;
@@ -268,26 +262,15 @@ rte_mempool_calc_mem_size_def(const struct rte_mempool *mp,
 			      size_t *min_chunk_size,
 			      __rte_unused size_t *align)
 {
-	unsigned int mp_flags;
-	int ret;
 	size_t total_elt_sz;
 	size_t mem_size;
 
-	/* Get mempool capabilities */
-	mp_flags = 0;
-	ret = rte_mempool_ops_get_capabilities(mp, &mp_flags);
-	if ((ret < 0) && (ret != -ENOTSUP))
-		return ret;
-
 	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
 
 	mem_size = rte_mempool_xmem_size_int(obj_num, total_elt_sz, pg_shift,
-					     mp->flags | mp_flags);
+					     mp->flags);
 
-	if (mp_flags & MEMPOOL_F_CAPA_PHYS_CONTIG)
-		*min_chunk_size = mem_size;
-	else
-		*min_chunk_size = RTE_MAX((size_t)1 << pg_shift, total_elt_sz);
+	*min_chunk_size = RTE_MAX((size_t)1 << pg_shift, total_elt_sz);
 
 	/* No extra align requirements by default */
 
@@ -312,18 +295,12 @@ rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
 ssize_t
 rte_mempool_xmem_usage(__rte_unused void *vaddr, uint32_t elt_num,
 	size_t total_elt_sz, const rte_iova_t iova[], uint32_t pg_num,
-	uint32_t pg_shift, unsigned int flags)
+	uint32_t pg_shift, __rte_unused unsigned int flags)
 {
 	uint32_t elt_cnt = 0;
 	rte_iova_t start, end;
 	uint32_t iova_idx;
 	size_t pg_sz = (size_t)1 << pg_shift;
-	unsigned int mask;
-
-	mask = MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS | MEMPOOL_F_CAPA_PHYS_CONTIG;
-	if ((flags & mask) == mask)
-		/* alignment need one additional object */
-		elt_num += 1;
 
 	/* if iova is NULL, assume contiguous memory */
 	if (iova == NULL) {
@@ -426,8 +403,6 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
 	void *opaque)
 {
-	unsigned total_elt_sz;
-	unsigned int mp_cap_flags;
 	unsigned i = 0;
 	size_t off;
 	struct rte_mempool_memhdr *memhdr;
@@ -450,24 +425,6 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	if (mp->populated_size >= mp->size)
 		return -ENOSPC;
 
-	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
-
-	/* Get mempool capabilities */
-	mp_cap_flags = 0;
-	ret = rte_mempool_ops_get_capabilities(mp, &mp_cap_flags);
-	if ((ret < 0) && (ret != -ENOTSUP))
-		return ret;
-
-	/* Detect pool area has sufficient space for elements */
-	if (mp_cap_flags & MEMPOOL_F_CAPA_PHYS_CONTIG) {
-		if (len < total_elt_sz * mp->size) {
-			RTE_LOG(ERR, MEMPOOL,
-				"pool area %" PRIx64 " not enough\n",
-				(uint64_t)len);
-			return -ENOSPC;
-		}
-	}
-
 	memhdr = rte_zmalloc("MEMPOOL_MEMHDR", sizeof(*memhdr), 0);
 	if (memhdr == NULL)
 		return -ENOMEM;
@@ -479,10 +436,7 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	memhdr->free_cb = free_cb;
 	memhdr->opaque = opaque;
 
-	if (mp_cap_flags & MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS)
-		/* align object start address to a multiple of total_elt_sz */
-		off = total_elt_sz - ((uintptr_t)vaddr % total_elt_sz);
-	else if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
+	if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
 		off = RTE_PTR_ALIGN_CEIL(vaddr, 8) - vaddr;
 	else
 		off = RTE_PTR_ALIGN_CEIL(vaddr, RTE_CACHE_LINE_SIZE) - vaddr;
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index f6ffab9..697d618 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -274,24 +274,6 @@ struct rte_mempool {
 #define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
 #define MEMPOOL_F_POOL_CREATED   0x0010 /**< Internal: pool is created. */
 #define MEMPOOL_F_NO_PHYS_CONTIG 0x0020 /**< Don't need physically contiguous objs. */
-/**
- * This capability flag is advertised by a mempool handler, if the whole
- * memory area containing the objects must be physically contiguous.
- * Note: This flag should not be passed by application.
- */
-#define MEMPOOL_F_CAPA_PHYS_CONTIG 0x0040
-/**
- * This capability flag is advertised by a mempool handler. Used for a case
- * where mempool driver wants object start address(vaddr) aligned to block
- * size(/ total element size).
- *
- * Note:
- * - This flag should not be passed by application.
- *   Flag used for mempool driver only.
- * - Mempool driver must also set MEMPOOL_F_CAPA_PHYS_CONTIG flag along with
- *   MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS.
- */
-#define MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS 0x0080
 
 /**
  * @internal When debug is enabled, store some statistics.
@@ -417,12 +399,6 @@ typedef int (*rte_mempool_dequeue_t)(struct rte_mempool *mp,
 typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
 
 /**
- * Get the mempool capabilities.
- */
-typedef int (*rte_mempool_get_capabilities_t)(const struct rte_mempool *mp,
-		unsigned int *flags);
-
-/**
  * Notify new memory area to mempool.
  */
 typedef int (*rte_mempool_ops_register_memory_area_t)
@@ -523,10 +499,6 @@ struct rte_mempool_ops {
 	rte_mempool_dequeue_t dequeue;   /**< Dequeue an object. */
 	rte_mempool_get_count get_count; /**< Get qty of available objs. */
 	/**
-	 * Get the mempool capabilities
-	 */
-	rte_mempool_get_capabilities_t get_capabilities;
-	/**
 	 * Notify new memory area to mempool
 	 */
 	rte_mempool_ops_register_memory_area_t register_memory_area;
@@ -652,22 +624,6 @@ unsigned
 rte_mempool_ops_get_count(const struct rte_mempool *mp);
 
 /**
- * @internal wrapper for mempool_ops get_capabilities callback.
- *
- * @param mp [in]
- *   Pointer to the memory pool.
- * @param flags [out]
- *   Pointer to the mempool flags.
- * @return
- *   - 0: Success; The mempool driver has advertised his pool capabilities in
- *   flags param.
- *   - -ENOTSUP - doesn't support get_capabilities ops (valid case).
- *   - Otherwise, pool create fails.
- */
-int
-rte_mempool_ops_get_capabilities(const struct rte_mempool *mp,
-					unsigned int *flags);
-/**
  * @internal wrapper for mempool_ops register_memory_area callback.
  * API to notify the mempool handler when a new memory area is added to pool.
  *
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
index 7c4a22b..5ab643b 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -86,7 +86,6 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h)
 	ops->enqueue = h->enqueue;
 	ops->dequeue = h->dequeue;
 	ops->get_count = h->get_count;
-	ops->get_capabilities = h->get_capabilities;
 	ops->register_memory_area = h->register_memory_area;
 	ops->calc_mem_size = h->calc_mem_size;
 	ops->populate = h->populate;
@@ -128,19 +127,6 @@ rte_mempool_ops_get_count(const struct rte_mempool *mp)
 	return ops->get_count(mp);
 }
 
-/* wrapper to get external mempool capabilities. */
-int
-rte_mempool_ops_get_capabilities(const struct rte_mempool *mp,
-					unsigned int *flags)
-{
-	struct rte_mempool_ops *ops;
-
-	ops = rte_mempool_get_ops(mp->ops_index);
-
-	RTE_FUNC_PTR_OR_ERR_RET(ops->get_capabilities, -ENOTSUP);
-	return ops->get_capabilities(mp, flags);
-}
-
 /* wrapper to notify new memory area to external mempool */
 int
 rte_mempool_ops_register_memory_area(const struct rte_mempool *mp, char *vaddr,
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index 00288de..ab30b16 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -45,7 +45,6 @@ DPDK_16.07 {
 DPDK_17.11 {
 	global:
 
-	rte_mempool_ops_get_capabilities;
 	rte_mempool_ops_register_memory_area;
 	rte_mempool_populate_iova;
 	rte_mempool_populate_iova_tab;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [RFC v2 07/17] mempool: deprecate xmem functions
  2018-01-23 13:15 ` [RFC v2 00/17] " Andrew Rybchenko
                     ` (5 preceding siblings ...)
  2018-01-23 13:16   ` [RFC v2 06/17] mempool: remove callback to get capabilities Andrew Rybchenko
@ 2018-01-23 13:16   ` Andrew Rybchenko
  2018-01-23 13:16   ` [RFC v2 08/17] mempool/octeontx: prepare to remove register memory area op Andrew Rybchenko
                     ` (14 subsequent siblings)
  21 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-01-23 13:16 UTC (permalink / raw)
  To: dev; +Cc: Olivier Matz

Suggested-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 lib/librte_mempool/rte_mempool.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 697d618..e95b1a7 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -916,6 +916,7 @@ rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
  *   The pointer to the new allocated mempool, on success. NULL on error
  *   with rte_errno set appropriately. See rte_mempool_create() for details.
  */
+__rte_deprecated
 struct rte_mempool *
 rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
 		unsigned cache_size, unsigned private_data_size,
@@ -1678,6 +1679,7 @@ uint32_t rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
  * @return
  *   Required memory size aligned at page boundary.
  */
+__rte_deprecated
 size_t rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz,
 	uint32_t pg_shift, unsigned int flags);
 
@@ -1709,6 +1711,7 @@ size_t rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz,
  *   buffer is too small, return a negative value whose absolute value
  *   is the actual number of elements that can be stored in that buffer.
  */
+__rte_deprecated
 ssize_t rte_mempool_xmem_usage(void *vaddr, uint32_t elt_num,
 	size_t total_elt_sz, const rte_iova_t iova[], uint32_t pg_num,
 	uint32_t pg_shift, unsigned int flags);
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [RFC v2 08/17] mempool/octeontx: prepare to remove register memory area op
  2018-01-23 13:15 ` [RFC v2 00/17] " Andrew Rybchenko
                     ` (6 preceding siblings ...)
  2018-01-23 13:16   ` [RFC v2 07/17] mempool: deprecate xmem functions Andrew Rybchenko
@ 2018-01-23 13:16   ` Andrew Rybchenko
  2018-01-23 13:16   ` [RFC v2 09/17] mempool/dpaa: convert to use populate driver op Andrew Rybchenko
                     ` (13 subsequent siblings)
  21 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-01-23 13:16 UTC (permalink / raw)
  To: dev; +Cc: Olivier Matz, Santosh Shukla, Jerin Jacob

The callback to populate pool objects has all the required information
and is executed a bit later than the register memory area callback.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/mempool/octeontx/rte_mempool_octeontx.c | 25 ++++++++++---------------
 1 file changed, 10 insertions(+), 15 deletions(-)

diff --git a/drivers/mempool/octeontx/rte_mempool_octeontx.c b/drivers/mempool/octeontx/rte_mempool_octeontx.c
index 36cc23b..8700bfb 100644
--- a/drivers/mempool/octeontx/rte_mempool_octeontx.c
+++ b/drivers/mempool/octeontx/rte_mempool_octeontx.c
@@ -151,26 +151,15 @@ octeontx_fpavf_calc_mem_size(const struct rte_mempool *mp,
 }
 
 static int
-octeontx_fpavf_register_memory_area(const struct rte_mempool *mp,
-				    char *vaddr, rte_iova_t paddr, size_t len)
-{
-	RTE_SET_USED(paddr);
-	uint8_t gpool;
-	uintptr_t pool_bar;
-
-	gpool = octeontx_fpa_bufpool_gpool(mp->pool_id);
-	pool_bar = mp->pool_id & ~(uint64_t)FPA_GPOOL_MASK;
-
-	return octeontx_fpavf_pool_set_range(pool_bar, len, vaddr, gpool);
-}
-
-static int
 octeontx_fpavf_populate(struct rte_mempool *mp, unsigned int max_objs,
 			void *vaddr, rte_iova_t iova, size_t len,
 			rte_mempool_populate_obj_cb_t *obj_cb)
 {
 	size_t total_elt_sz;
 	size_t off;
+	uint8_t gpool;
+	uintptr_t pool_bar;
+	int ret;
 
 	if (iova == RTE_BAD_IOVA)
 		return -EINVAL;
@@ -187,6 +176,13 @@ octeontx_fpavf_populate(struct rte_mempool *mp, unsigned int max_objs,
 	iova += off;
 	len -= off;
 
+	gpool = octeontx_fpa_bufpool_gpool(mp->pool_id);
+	pool_bar = mp->pool_id & ~(uint64_t)FPA_GPOOL_MASK;
+
+	ret = octeontx_fpavf_pool_set_range(pool_bar, len, vaddr, gpool);
+	if (ret < 0)
+		return ret;
+
 	return rte_mempool_populate_one_by_one(mp, max_objs, vaddr, iova, len,
 					       obj_cb);
 }
@@ -198,7 +194,6 @@ static struct rte_mempool_ops octeontx_fpavf_ops = {
 	.enqueue = octeontx_fpavf_enqueue,
 	.dequeue = octeontx_fpavf_dequeue,
 	.get_count = octeontx_fpavf_get_count,
-	.register_memory_area = octeontx_fpavf_register_memory_area,
 	.calc_mem_size = octeontx_fpavf_calc_mem_size,
 	.populate = octeontx_fpavf_populate,
 };
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [RFC v2 09/17] mempool/dpaa: convert to use populate driver op
  2018-01-23 13:15 ` [RFC v2 00/17] " Andrew Rybchenko
                     ` (7 preceding siblings ...)
  2018-01-23 13:16   ` [RFC v2 08/17] mempool/octeontx: prepare to remove register memory area op Andrew Rybchenko
@ 2018-01-23 13:16   ` Andrew Rybchenko
  2018-01-23 13:16   ` [RFC v2 10/17] mempool: remove callback to register memory area Andrew Rybchenko
                     ` (12 subsequent siblings)
  21 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-01-23 13:16 UTC (permalink / raw)
  To: dev; +Cc: Olivier Matz, Hemant Agrawal, Shreyansh Jain

The populate mempool driver callback is executed a bit later than
register memory area, provides the same information and will
substitute the latter, since it gives more flexibility: in addition
to notification about the memory area, it allows customizing how
mempool objects are stored in memory.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/mempool/dpaa/dpaa_mempool.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/drivers/mempool/dpaa/dpaa_mempool.c b/drivers/mempool/dpaa/dpaa_mempool.c
index ddc4e47..a179804 100644
--- a/drivers/mempool/dpaa/dpaa_mempool.c
+++ b/drivers/mempool/dpaa/dpaa_mempool.c
@@ -260,10 +260,9 @@ dpaa_mbuf_get_count(const struct rte_mempool *mp)
 }
 
 static int
-dpaa_register_memory_area(const struct rte_mempool *mp,
-			  char *vaddr __rte_unused,
-			  rte_iova_t paddr __rte_unused,
-			  size_t len)
+dpaa_populate(struct rte_mempool *mp, unsigned int max_objs,
+	      char *vaddr, rte_iova_t paddr, size_t len,
+	      rte_mempool_populate_obj_cb_t *obj_cb)
 {
 	struct dpaa_bp_info *bp_info;
 	unsigned int total_elt_sz;
@@ -286,7 +285,9 @@ dpaa_register_memory_area(const struct rte_mempool *mp,
 		/* Else, Memory will be allocated from multiple memzones */
 		bp_info->flags |= DPAA_MPOOL_MULTI_MEMZONE;
 
-	return 0;
+	return rte_mempool_populate_one_by_one(mp, max_objs, vaddr, paddr, len,
+					       obj_cb);
+
 }
 
 struct rte_mempool_ops dpaa_mpool_ops = {
@@ -296,7 +297,7 @@ struct rte_mempool_ops dpaa_mpool_ops = {
 	.enqueue = dpaa_mbuf_free_bulk,
 	.dequeue = dpaa_mbuf_alloc_bulk,
 	.get_count = dpaa_mbuf_get_count,
-	.register_memory_area = dpaa_register_memory_area,
+	.populate = dpaa_populate,
 };
 
 MEMPOOL_REGISTER_OPS(dpaa_mpool_ops);
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [RFC v2 10/17] mempool: remove callback to register memory area
  2018-01-23 13:15 ` [RFC v2 00/17] " Andrew Rybchenko
                     ` (8 preceding siblings ...)
  2018-01-23 13:16   ` [RFC v2 09/17] mempool/dpaa: convert to use populate driver op Andrew Rybchenko
@ 2018-01-23 13:16   ` Andrew Rybchenko
  2018-01-23 13:16   ` [RFC v2 11/17] mempool: ensure the mempool is initialized before populating Andrew Rybchenko
                     ` (11 subsequent siblings)
  21 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-01-23 13:16 UTC (permalink / raw)
  To: dev; +Cc: Olivier Matz

The callback is not required anymore since there is a new callback
to populate objects using a provided memory area, which conveys
the same information.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 lib/librte_mempool/rte_mempool.c           |  5 -----
 lib/librte_mempool/rte_mempool.h           | 31 ------------------------------
 lib/librte_mempool/rte_mempool_ops.c       | 14 --------------
 lib/librte_mempool/rte_mempool_version.map |  1 -
 4 files changed, 51 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 32b3f94..fc9c95a 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -416,11 +416,6 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 		mp->flags |= MEMPOOL_F_POOL_CREATED;
 	}
 
-	/* Notify memory area to mempool */
-	ret = rte_mempool_ops_register_memory_area(mp, vaddr, iova, len);
-	if (ret != -ENOTSUP && ret < 0)
-		return ret;
-
 	/* mempool is already populated */
 	if (mp->populated_size >= mp->size)
 		return -ENOSPC;
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index e95b1a7..6a0039d 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -399,12 +399,6 @@ typedef int (*rte_mempool_dequeue_t)(struct rte_mempool *mp,
 typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
 
 /**
- * Notify new memory area to mempool.
- */
-typedef int (*rte_mempool_ops_register_memory_area_t)
-(const struct rte_mempool *mp, char *vaddr, rte_iova_t iova, size_t len);
-
-/**
  * Calculate memory size required to store specified number of objects.
  *
 * Note that if object size is bigger than page size, then it assumes
@@ -499,10 +493,6 @@ struct rte_mempool_ops {
 	rte_mempool_dequeue_t dequeue;   /**< Dequeue an object. */
 	rte_mempool_get_count get_count; /**< Get qty of available objs. */
 	/**
-	 * Notify new memory area to mempool
-	 */
-	rte_mempool_ops_register_memory_area_t register_memory_area;
-	/**
 	 * Optional callback to calculate memory size required to
 	 * store specified number of objects.
 	 */
@@ -624,27 +614,6 @@ unsigned
 rte_mempool_ops_get_count(const struct rte_mempool *mp);
 
 /**
- * @internal wrapper for mempool_ops register_memory_area callback.
- * API to notify the mempool handler when a new memory area is added to pool.
- *
- * @param mp
- *   Pointer to the memory pool.
- * @param vaddr
- *   Pointer to the buffer virtual address.
- * @param iova
- *   Pointer to the buffer IO address.
- * @param len
- *   Pool size.
- * @return
- *   - 0: Success;
- *   - -ENOTSUP - doesn't support register_memory_area ops (valid error case).
- *   - Otherwise, rte_mempool_populate_phys fails thus pool create fails.
- */
-int
-rte_mempool_ops_register_memory_area(const struct rte_mempool *mp,
-				char *vaddr, rte_iova_t iova, size_t len);
-
-/**
  * @internal wrapper for mempool_ops calc_mem_size callback.
  * API to calculate size of memory required to store specified number of
 * objects.
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
index 5ab643b..37b0802 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -86,7 +86,6 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h)
 	ops->enqueue = h->enqueue;
 	ops->dequeue = h->dequeue;
 	ops->get_count = h->get_count;
-	ops->register_memory_area = h->register_memory_area;
 	ops->calc_mem_size = h->calc_mem_size;
 	ops->populate = h->populate;
 
@@ -128,19 +127,6 @@ rte_mempool_ops_get_count(const struct rte_mempool *mp)
 }
 
 /* wrapper to notify new memory area to external mempool */
-int
-rte_mempool_ops_register_memory_area(const struct rte_mempool *mp, char *vaddr,
-					rte_iova_t iova, size_t len)
-{
-	struct rte_mempool_ops *ops;
-
-	ops = rte_mempool_get_ops(mp->ops_index);
-
-	RTE_FUNC_PTR_OR_ERR_RET(ops->register_memory_area, -ENOTSUP);
-	return ops->register_memory_area(mp, vaddr, iova, len);
-}
-
-/* wrapper to notify new memory area to external mempool */
 ssize_t
 rte_mempool_ops_calc_mem_size(const struct rte_mempool *mp,
 				uint32_t obj_num, uint32_t pg_shift,
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index ab30b16..4f7e2b2 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -45,7 +45,6 @@ DPDK_16.07 {
 DPDK_17.11 {
 	global:
 
-	rte_mempool_ops_register_memory_area;
 	rte_mempool_populate_iova;
 	rte_mempool_populate_iova_tab;
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [RFC v2 11/17] mempool: ensure the mempool is initialized before populating
  2018-01-23 13:15 ` [RFC v2 00/17] " Andrew Rybchenko
                     ` (9 preceding siblings ...)
  2018-01-23 13:16   ` [RFC v2 10/17] mempool: remove callback to register memory area Andrew Rybchenko
@ 2018-01-23 13:16   ` Andrew Rybchenko
  2018-01-31 16:45     ` Olivier Matz
  2018-01-23 13:16   ` [RFC v2 12/17] mempool/bucket: implement bucket mempool manager Andrew Rybchenko
                     ` (10 subsequent siblings)
  21 siblings, 1 reply; 197+ messages in thread
From: Andrew Rybchenko @ 2018-01-23 13:16 UTC (permalink / raw)
  To: dev; +Cc: Olivier Matz, Artem V. Andreev

From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>

The callback to calculate the required memory area size may require
mempool driver data to be already allocated and initialized.

Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
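Note for reviewers: a hypothetical example of why the ordering matters. If a
driver's calc_mem_size callback reads state created by its alloc op (the
names struct my_priv and my_calc_mem_size are made up for illustration), the
pool must be initialized before rte_mempool_populate_default() invokes
rte_mempool_ops_calc_mem_size():

struct my_priv {
	size_t chunk_align;	/* filled in by the driver's .alloc op */
};

static ssize_t
my_calc_mem_size(const struct rte_mempool *mp, uint32_t obj_num,
		 uint32_t pg_shift, size_t *min_chunk_size, size_t *align)
{
	/* pool_data is only valid once the alloc op has run */
	const struct my_priv *priv = mp->pool_data;
	ssize_t mem_size;

	mem_size = rte_mempool_calc_mem_size_def(mp, obj_num, pg_shift,
						 min_chunk_size, align);
	if (mem_size >= 0)
		*align = priv->chunk_align;

	return mem_size;
}

Without the mempool_maybe_initialize() call added below, mp->pool_data would
still be unset when such a callback runs.
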
 lib/librte_mempool/rte_mempool.c | 29 ++++++++++++++++++++++-------
 1 file changed, 22 insertions(+), 7 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index fc9c95a..cbb4dd5 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -370,6 +370,21 @@ rte_mempool_free_memchunks(struct rte_mempool *mp)
 	}
 }
 
+static int
+mempool_maybe_initialize(struct rte_mempool *mp)
+{
+	int ret;
+
+	/* create the internal ring if not already done */
+	if ((mp->flags & MEMPOOL_F_POOL_CREATED) == 0) {
+		ret = rte_mempool_ops_alloc(mp);
+		if (ret != 0)
+			return ret;
+		mp->flags |= MEMPOOL_F_POOL_CREATED;
+	}
+	return 0;
+}
+
 int
 rte_mempool_populate_one_by_one(struct rte_mempool *mp, unsigned int max_objs,
 		void *vaddr, rte_iova_t iova, size_t len,
@@ -408,13 +423,9 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	struct rte_mempool_memhdr *memhdr;
 	int ret;
 
-	/* create the internal ring if not already done */
-	if ((mp->flags & MEMPOOL_F_POOL_CREATED) == 0) {
-		ret = rte_mempool_ops_alloc(mp);
-		if (ret != 0)
-			return ret;
-		mp->flags |= MEMPOOL_F_POOL_CREATED;
-	}
+	ret = mempool_maybe_initialize(mp);
+	if (ret != 0)
+		return ret;
 
 	/* mempool is already populated */
 	if (mp->populated_size >= mp->size)
@@ -587,6 +598,10 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 	unsigned mz_id, n;
 	int ret;
 
+	ret = mempool_maybe_initialize(mp);
+	if (ret != 0)
+		return ret;
+
 	/* mempool must not be populated */
 	if (mp->nb_mem_chunks != 0)
 		return -EEXIST;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [RFC v2 12/17] mempool/bucket: implement bucket mempool manager
  2018-01-23 13:15 ` [RFC v2 00/17] " Andrew Rybchenko
                     ` (10 preceding siblings ...)
  2018-01-23 13:16   ` [RFC v2 11/17] mempool: ensure the mempool is initialized before populating Andrew Rybchenko
@ 2018-01-23 13:16   ` Andrew Rybchenko
  2018-01-23 13:16   ` [RFC v2 13/17] mempool: support flushing the default cache of the mempool Andrew Rybchenko
                     ` (9 subsequent siblings)
  21 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-01-23 13:16 UTC (permalink / raw)
  To: dev; +Cc: Olivier Matz, Artem V. Andreev

From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>

The manager provides a way to allocate a physically and virtually
contiguous set of objects.

Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
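Note for reviewers: a usage sketch from the application side, assuming the
driver registers its ops under the name "bucket"; error handling is minimal
and the object count and size are arbitrary.

#include <rte_lcore.h>
#include <rte_mempool.h>

static struct rte_mempool *
create_bucket_pool(void)
{
	struct rte_mempool *mp;

	/* create an empty pool, then select the bucket ops before populating */
	mp = rte_mempool_create_empty("bucket_pool", 8192, 2048,
				      256, 0, rte_socket_id(), 0);
	if (mp == NULL)
		return NULL;

	if (rte_mempool_set_ops_byname(mp, "bucket", NULL) != 0 ||
	    rte_mempool_populate_default(mp) < 0) {
		rte_mempool_free(mp);
		return NULL;
	}

	return mp;
}

A pool created this way still serves ordinary get/put requests; dequeue
requests that are a multiple of the bucket size are satisfied with whole
physically contiguous buckets, as described in the comment at the top of
rte_mempool_bucket.c.
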
 MAINTAINERS                                        |   9 +
 config/common_base                                 |   2 +
 drivers/mempool/Makefile                           |   1 +
 drivers/mempool/bucket/Makefile                    |  27 +
 drivers/mempool/bucket/rte_mempool_bucket.c        | 561 +++++++++++++++++++++
 .../mempool/bucket/rte_mempool_bucket_version.map  |   4 +
 mk/rte.app.mk                                      |   1 +
 7 files changed, 605 insertions(+)
 create mode 100644 drivers/mempool/bucket/Makefile
 create mode 100644 drivers/mempool/bucket/rte_mempool_bucket.c
 create mode 100644 drivers/mempool/bucket/rte_mempool_bucket_version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index 5788ea0..9df2cf5 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -303,6 +303,15 @@ F: test/test/test_event_eth_rx_adapter.c
 F: doc/guides/prog_guide/event_ethernet_rx_adapter.rst
 
 
+Memory Pool Drivers
+-------------------
+
+Bucket memory pool
+M: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
+M: Andrew Rybchenko <arybchenko@solarflare.com>
+F: drivers/mempool/bucket/
+
+
 Bus Drivers
 -----------
 
diff --git a/config/common_base b/config/common_base
index 170a389..4fe42f6 100644
--- a/config/common_base
+++ b/config/common_base
@@ -622,6 +622,8 @@ CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG=n
 #
 # Compile Mempool drivers
 #
+CONFIG_RTE_DRIVER_MEMPOOL_BUCKET=y
+CONFIG_RTE_DRIVER_MEMPOOL_BUCKET_SIZE_KB=64
 CONFIG_RTE_DRIVER_MEMPOOL_RING=y
 CONFIG_RTE_DRIVER_MEMPOOL_STACK=y
 
diff --git a/drivers/mempool/Makefile b/drivers/mempool/Makefile
index aae2cb1..45fca04 100644
--- a/drivers/mempool/Makefile
+++ b/drivers/mempool/Makefile
@@ -3,6 +3,7 @@
 
 include $(RTE_SDK)/mk/rte.vars.mk
 
+DIRS-$(CONFIG_RTE_DRIVER_MEMPOOL_BUCKET) += bucket
 DIRS-$(CONFIG_RTE_LIBRTE_DPAA_MEMPOOL) += dpaa
 DIRS-$(CONFIG_RTE_LIBRTE_DPAA2_MEMPOOL) += dpaa2
 DIRS-$(CONFIG_RTE_DRIVER_MEMPOOL_RING) += ring
diff --git a/drivers/mempool/bucket/Makefile b/drivers/mempool/bucket/Makefile
new file mode 100644
index 0000000..7364916
--- /dev/null
+++ b/drivers/mempool/bucket/Makefile
@@ -0,0 +1,27 @@
+# SPDX-License-Identifier: BSD-3-Clause
+#
+# Copyright (c) 2017-2018 Solarflare Communications Inc.
+# All rights reserved.
+#
+# This software was jointly developed between OKTET Labs (under contract
+# for Solarflare) and Solarflare Communications, Inc.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_mempool_bucket.a
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+LDLIBS += -lrte_eal -lrte_mempool -lrte_ring
+
+EXPORT_MAP := rte_mempool_bucket_version.map
+
+LIBABIVER := 1
+
+SRCS-$(CONFIG_RTE_DRIVER_MEMPOOL_BUCKET) += rte_mempool_bucket.c
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/mempool/bucket/rte_mempool_bucket.c b/drivers/mempool/bucket/rte_mempool_bucket.c
new file mode 100644
index 0000000..dc4e1dc
--- /dev/null
+++ b/drivers/mempool/bucket/rte_mempool_bucket.c
@@ -0,0 +1,561 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright (c) 2017-2018 Solarflare Communications Inc.
+ * All rights reserved.
+ *
+ * This software was jointly developed between OKTET Labs (under contract
+ * for Solarflare) and Solarflare Communications, Inc.
+ */
+
+#include <stdbool.h>
+#include <stdio.h>
+#include <string.h>
+
+#include <rte_errno.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+#include <rte_malloc.h>
+
+/*
+ * The general idea of the bucket mempool driver is as follows.
+ * We keep track of physically contiguous groups (buckets) of objects
+ * of a certain size. Every such group has a counter that is
+ * incremented every time an object from that group is enqueued.
+ * Until the bucket is full, no objects from it are eligible for allocation.
+ * If a request is made to dequeue a multiple of the bucket size, it is
+ * satisfied by returning whole buckets, instead of separate objects.
+ */
+
+
+struct bucket_header {
+	unsigned int lcore_id;
+	uint8_t fill_cnt;
+};
+
+struct bucket_stack {
+	unsigned int top;
+	unsigned int limit;
+	void *objects[];
+};
+
+struct bucket_data {
+	unsigned int header_size;
+	unsigned int total_elt_size;
+	unsigned int obj_per_bucket;
+	uintptr_t bucket_page_mask;
+	struct rte_ring *shared_bucket_ring;
+	struct bucket_stack *buckets[RTE_MAX_LCORE];
+	/*
+	 * Multi-producer single-consumer ring to hold objects that are
+	 * returned to the mempool at a different lcore than initially
+	 * dequeued
+	 */
+	struct rte_ring *adoption_buffer_rings[RTE_MAX_LCORE];
+	struct rte_ring *shared_orphan_ring;
+	struct rte_mempool *pool;
+	unsigned int bucket_mem_size;
+};
+
+static struct bucket_stack *
+bucket_stack_create(const struct rte_mempool *mp, unsigned int n_elts)
+{
+	struct bucket_stack *stack;
+
+	stack = rte_zmalloc_socket("bucket_stack",
+				   sizeof(struct bucket_stack) +
+				   n_elts * sizeof(void *),
+				   RTE_CACHE_LINE_SIZE,
+				   mp->socket_id);
+	if (stack == NULL)
+		return NULL;
+	stack->limit = n_elts;
+	stack->top = 0;
+
+	return stack;
+}
+
+static void
+bucket_stack_push(struct bucket_stack *stack, void *obj)
+{
+	RTE_ASSERT(stack->top < stack->limit);
+	stack->objects[stack->top++] = obj;
+}
+
+static void *
+bucket_stack_pop_unsafe(struct bucket_stack *stack)
+{
+	RTE_ASSERT(stack->top > 0);
+	return stack->objects[--stack->top];
+}
+
+static void *
+bucket_stack_pop(struct bucket_stack *stack)
+{
+	if (stack->top == 0)
+		return NULL;
+	return bucket_stack_pop_unsafe(stack);
+}
+
+static int
+bucket_enqueue_single(struct bucket_data *bd, void *obj)
+{
+	int rc = 0;
+	uintptr_t addr = (uintptr_t)obj;
+	struct bucket_header *hdr;
+	unsigned int lcore_id = rte_lcore_id();
+
+	addr &= bd->bucket_page_mask;
+	hdr = (struct bucket_header *)addr;
+
+	if (likely(hdr->lcore_id == lcore_id)) {
+		if (hdr->fill_cnt < bd->obj_per_bucket - 1) {
+			hdr->fill_cnt++;
+		} else {
+			hdr->fill_cnt = 0;
+			/* Stack is big enough to put all buckets */
+			bucket_stack_push(bd->buckets[lcore_id], hdr);
+		}
+	} else if (hdr->lcore_id != LCORE_ID_ANY) {
+		struct rte_ring *adopt_ring =
+			bd->adoption_buffer_rings[hdr->lcore_id];
+
+		rc = rte_ring_enqueue(adopt_ring, obj);
+		/* Ring is big enough to put all objects */
+		RTE_ASSERT(rc == 0);
+	} else if (hdr->fill_cnt < bd->obj_per_bucket - 1) {
+		hdr->fill_cnt++;
+	} else {
+		hdr->fill_cnt = 0;
+		rc = rte_ring_enqueue(bd->shared_bucket_ring, hdr);
+		/* Ring is big enough to put all buckets */
+		RTE_ASSERT(rc == 0);
+	}
+
+	return rc;
+}
+
+static int
+bucket_enqueue(struct rte_mempool *mp, void * const *obj_table,
+	       unsigned int n)
+{
+	struct bucket_data *bd = mp->pool_data;
+	unsigned int i;
+	int rc = 0;
+
+	for (i = 0; i < n; i++) {
+		rc = bucket_enqueue_single(bd, obj_table[i]);
+		RTE_ASSERT(rc == 0);
+	}
+	return rc;
+}
+
+static void **
+bucket_fill_obj_table(const struct bucket_data *bd, void **pstart,
+		      void **obj_table, unsigned int n)
+{
+	unsigned int i;
+	uint8_t *objptr = *pstart;
+
+	for (objptr += bd->header_size, i = 0; i < n;
+	     i++, objptr += bd->total_elt_size)
+		*obj_table++ = objptr;
+	*pstart = objptr;
+	return obj_table;
+}
+
+static int
+bucket_dequeue_orphans(struct bucket_data *bd, void **obj_table,
+		       unsigned int n_orphans)
+{
+	unsigned int i;
+	int rc;
+	uint8_t *objptr;
+
+	rc = rte_ring_dequeue_bulk(bd->shared_orphan_ring, obj_table,
+				   n_orphans, NULL);
+	if (unlikely(rc != (int)n_orphans)) {
+		struct bucket_header *hdr;
+
+		objptr = bucket_stack_pop(bd->buckets[rte_lcore_id()]);
+		hdr = (struct bucket_header *)objptr;
+
+		if (objptr == NULL) {
+			rc = rte_ring_dequeue(bd->shared_bucket_ring,
+					      (void **)&objptr);
+			if (rc != 0) {
+				rte_errno = ENOBUFS;
+				return -rte_errno;
+			}
+			hdr = (struct bucket_header *)objptr;
+			hdr->lcore_id = rte_lcore_id();
+		}
+		hdr->fill_cnt = 0;
+		bucket_fill_obj_table(bd, (void **)&objptr, obj_table,
+				      n_orphans);
+		for (i = n_orphans; i < bd->obj_per_bucket; i++,
+			     objptr += bd->total_elt_size) {
+			rc = rte_ring_enqueue(bd->shared_orphan_ring,
+					      objptr);
+			if (rc != 0) {
+				RTE_ASSERT(0);
+				rte_errno = -rc;
+				return rc;
+			}
+		}
+	}
+
+	return 0;
+}
+
+static int
+bucket_dequeue_buckets(struct bucket_data *bd, void **obj_table,
+		       unsigned int n_buckets)
+{
+	struct bucket_stack *cur_stack = bd->buckets[rte_lcore_id()];
+	unsigned int n_buckets_from_stack = RTE_MIN(n_buckets, cur_stack->top);
+	void **obj_table_base = obj_table;
+
+	n_buckets -= n_buckets_from_stack;
+	while (n_buckets_from_stack-- > 0) {
+		void *obj = bucket_stack_pop_unsafe(cur_stack);
+
+		obj_table = bucket_fill_obj_table(bd, &obj, obj_table,
+						  bd->obj_per_bucket);
+	}
+	while (n_buckets-- > 0) {
+		struct bucket_header *hdr;
+
+		if (unlikely(rte_ring_dequeue(bd->shared_bucket_ring,
+					      (void **)&hdr) != 0)) {
+			/*
+			 * Return the already-dequeued buffers
+			 * back to the mempool
+			 */
+			bucket_enqueue(bd->pool, obj_table_base,
+				       obj_table - obj_table_base);
+			rte_errno = ENOBUFS;
+			return -rte_errno;
+		}
+		hdr->lcore_id = rte_lcore_id();
+		obj_table = bucket_fill_obj_table(bd, (void **)&hdr,
+						  obj_table,
+						  bd->obj_per_bucket);
+	}
+
+	return 0;
+}
+
+static int
+bucket_adopt_orphans(struct bucket_data *bd)
+{
+	int rc = 0;
+	struct rte_ring *adopt_ring =
+		bd->adoption_buffer_rings[rte_lcore_id()];
+
+	if (unlikely(!rte_ring_empty(adopt_ring))) {
+		void *orphan;
+
+		while (rte_ring_sc_dequeue(adopt_ring, &orphan) == 0) {
+			rc = bucket_enqueue_single(bd, orphan);
+			RTE_ASSERT(rc == 0);
+		}
+	}
+	return rc;
+}
+
+static int
+bucket_dequeue(struct rte_mempool *mp, void **obj_table, unsigned int n)
+{
+	struct bucket_data *bd = mp->pool_data;
+	unsigned int n_buckets = n / bd->obj_per_bucket;
+	unsigned int n_orphans = n - n_buckets * bd->obj_per_bucket;
+	int rc = 0;
+
+	bucket_adopt_orphans(bd);
+
+	if (unlikely(n_orphans > 0)) {
+		rc = bucket_dequeue_orphans(bd, obj_table +
+					    (n_buckets * bd->obj_per_bucket),
+					    n_orphans);
+		if (rc != 0)
+			return rc;
+	}
+
+	if (likely(n_buckets > 0)) {
+		rc = bucket_dequeue_buckets(bd, obj_table, n_buckets);
+		if (unlikely(rc != 0) && n_orphans > 0) {
+			rte_ring_enqueue_bulk(bd->shared_orphan_ring,
+					      obj_table + (n_buckets *
+							   bd->obj_per_bucket),
+					      n_orphans, NULL);
+		}
+	}
+
+	return rc;
+}
+
+static void
+count_underfilled_buckets(struct rte_mempool *mp,
+			  void *opaque,
+			  struct rte_mempool_memhdr *memhdr,
+			  __rte_unused unsigned int mem_idx)
+{
+	unsigned int *pcount = opaque;
+	const struct bucket_data *bd = mp->pool_data;
+	unsigned int bucket_page_sz =
+		(unsigned int)(~bd->bucket_page_mask + 1);
+	uintptr_t align;
+	uint8_t *iter;
+
+	align = (uintptr_t)RTE_PTR_ALIGN_CEIL(memhdr->addr, bucket_page_sz) -
+		(uintptr_t)memhdr->addr;
+
+	for (iter = (uint8_t *)memhdr->addr + align;
+	     iter < (uint8_t *)memhdr->addr + memhdr->len;
+	     iter += bucket_page_sz) {
+		struct bucket_header *hdr = (struct bucket_header *)iter;
+
+		*pcount += hdr->fill_cnt;
+	}
+}
+
+static unsigned int
+bucket_get_count(const struct rte_mempool *mp)
+{
+	const struct bucket_data *bd = mp->pool_data;
+	unsigned int count =
+		bd->obj_per_bucket * rte_ring_count(bd->shared_bucket_ring) +
+		rte_ring_count(bd->shared_orphan_ring);
+	unsigned int i;
+
+	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		if (!rte_lcore_is_enabled(i))
+			continue;
+		count += bd->obj_per_bucket * bd->buckets[i]->top;
+	}
+
+	rte_mempool_mem_iter((struct rte_mempool *)(uintptr_t)mp,
+			     count_underfilled_buckets, &count);
+
+	return count;
+}
+
+static int
+bucket_alloc(struct rte_mempool *mp)
+{
+	int rg_flags = 0;
+	int rc = 0;
+	char rg_name[RTE_RING_NAMESIZE];
+	struct bucket_data *bd;
+	unsigned int i;
+	unsigned int bucket_header_size;
+
+	bd = rte_zmalloc_socket("bucket_pool", sizeof(*bd),
+				RTE_CACHE_LINE_SIZE, mp->socket_id);
+	if (bd == NULL) {
+		rc = -ENOMEM;
+		goto no_mem_for_data;
+	}
+	bd->pool = mp;
+	if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
+		bucket_header_size = sizeof(struct bucket_header);
+	else
+		bucket_header_size = RTE_CACHE_LINE_SIZE;
+	RTE_BUILD_BUG_ON(sizeof(struct bucket_header) > RTE_CACHE_LINE_SIZE);
+	bd->header_size = mp->header_size + bucket_header_size;
+	bd->total_elt_size = mp->header_size + mp->elt_size + mp->trailer_size;
+	bd->bucket_mem_size = RTE_DRIVER_MEMPOOL_BUCKET_SIZE_KB * 1024;
+	bd->obj_per_bucket = (bd->bucket_mem_size - bucket_header_size) /
+		bd->total_elt_size;
+	bd->bucket_page_mask = ~(rte_align64pow2(bd->bucket_mem_size) - 1);
+
+	if (mp->flags & MEMPOOL_F_SP_PUT)
+		rg_flags |= RING_F_SP_ENQ;
+	if (mp->flags & MEMPOOL_F_SC_GET)
+		rg_flags |= RING_F_SC_DEQ;
+
+	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		if (!rte_lcore_is_enabled(i))
+			continue;
+		bd->buckets[i] =
+			bucket_stack_create(mp, mp->size / bd->obj_per_bucket);
+		if (bd->buckets[i] == NULL) {
+			rc = -ENOMEM;
+			goto no_mem_for_stacks;
+		}
+		rc = snprintf(rg_name, sizeof(rg_name),
+			      RTE_MEMPOOL_MZ_FORMAT ".a%u", mp->name, i);
+		if (rc < 0 || rc >= (int)sizeof(rg_name)) {
+			rc = -ENAMETOOLONG;
+			goto no_mem_for_stacks;
+		}
+		bd->adoption_buffer_rings[i] =
+			rte_ring_create(rg_name, rte_align32pow2(mp->size + 1),
+					mp->socket_id,
+					rg_flags | RING_F_SC_DEQ);
+		if (bd->adoption_buffer_rings[i] == NULL) {
+			rc = -rte_errno;
+			goto no_mem_for_stacks;
+		}
+	}
+
+	rc = snprintf(rg_name, sizeof(rg_name),
+		      RTE_MEMPOOL_MZ_FORMAT ".0", mp->name);
+	if (rc < 0 || rc >= (int)sizeof(rg_name)) {
+		rc = -ENAMETOOLONG;
+		goto invalid_shared_orphan_ring;
+	}
+	bd->shared_orphan_ring =
+		rte_ring_create(rg_name, rte_align32pow2(mp->size + 1),
+				mp->socket_id, rg_flags);
+	if (bd->shared_orphan_ring == NULL) {
+		rc = -rte_errno;
+		goto cannot_create_shared_orphan_ring;
+	}
+
+	rc = snprintf(rg_name, sizeof(rg_name),
+		       RTE_MEMPOOL_MZ_FORMAT ".1", mp->name);
+	if (rc < 0 || rc >= (int)sizeof(rg_name)) {
+		rc = -ENAMETOOLONG;
+		goto invalid_shared_bucket_ring;
+	}
+	bd->shared_bucket_ring =
+		rte_ring_create(rg_name,
+				rte_align32pow2((mp->size + 1) /
+						bd->obj_per_bucket),
+				mp->socket_id, rg_flags);
+	if (bd->shared_bucket_ring == NULL) {
+		rc = -rte_errno;
+		goto cannot_create_shared_bucket_ring;
+	}
+
+	mp->pool_data = bd;
+
+	return 0;
+
+cannot_create_shared_bucket_ring:
+invalid_shared_bucket_ring:
+	rte_ring_free(bd->shared_orphan_ring);
+cannot_create_shared_orphan_ring:
+invalid_shared_orphan_ring:
+no_mem_for_stacks:
+	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		rte_free(bd->buckets[i]);
+		rte_ring_free(bd->adoption_buffer_rings[i]);
+	}
+	rte_free(bd);
+no_mem_for_data:
+	rte_errno = -rc;
+	return rc;
+}
+
+static void
+bucket_free(struct rte_mempool *mp)
+{
+	unsigned int i;
+	struct bucket_data *bd = mp->pool_data;
+
+	if (bd == NULL)
+		return;
+
+	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		rte_free(bd->buckets[i]);
+		rte_ring_free(bd->adoption_buffer_rings[i]);
+	}
+
+	rte_ring_free(bd->shared_orphan_ring);
+	rte_ring_free(bd->shared_bucket_ring);
+
+	rte_free(bd);
+}
+
+static ssize_t
+bucket_calc_mem_size(const struct rte_mempool *mp, uint32_t obj_num,
+		     __rte_unused uint32_t pg_shift, size_t *min_total_elt_size,
+		     size_t *align)
+{
+	struct bucket_data *bd = mp->pool_data;
+	unsigned int bucket_page_sz;
+
+	if (bd == NULL)
+		return -EINVAL;
+
+	bucket_page_sz = rte_align32pow2(bd->bucket_mem_size);
+	*align = bucket_page_sz;
+	*min_total_elt_size = bucket_page_sz;
+	/*
+	 * Each bucket occupies its own block aligned to
+	 * bucket_page_sz, so the required amount of memory is
+	 * a multiple of bucket_page_sz.
+	 * We also need extra space for a bucket header
+	 */
+	return ((obj_num + bd->obj_per_bucket - 1) /
+		bd->obj_per_bucket) * bucket_page_sz;
+}
+
+static int
+bucket_populate(struct rte_mempool *mp, unsigned int max_objs,
+		void *vaddr, rte_iova_t iova, size_t len,
+		rte_mempool_populate_obj_cb_t *obj_cb)
+{
+	struct bucket_data *bd = mp->pool_data;
+	unsigned int bucket_page_sz;
+	unsigned int bucket_header_sz;
+	unsigned int n_objs;
+	uintptr_t align;
+	uint8_t *iter;
+	int rc;
+
+	if (bd == NULL)
+		return -EINVAL;
+
+	bucket_page_sz = rte_align32pow2(bd->bucket_mem_size);
+	align = RTE_PTR_ALIGN_CEIL((uintptr_t)vaddr, bucket_page_sz) -
+		(uintptr_t)vaddr;
+
+	bucket_header_sz = bd->header_size - mp->header_size;
+	if (iova != RTE_BAD_IOVA)
+		iova += align + bucket_header_sz;
+
+	for (iter = (uint8_t *)vaddr + align, n_objs = 0;
+	     iter < (uint8_t *)vaddr + len && n_objs < max_objs;
+	     iter += bucket_page_sz) {
+		struct bucket_header *hdr = (struct bucket_header *)iter;
+		unsigned int chunk_len = bd->bucket_mem_size;
+
+		if ((size_t)(iter - (uint8_t *)vaddr) + chunk_len > len)
+			chunk_len = len - (iter - (uint8_t *)vaddr);
+		if (chunk_len <= bucket_header_sz)
+			break;
+		chunk_len -= bucket_header_sz;
+
+		hdr->fill_cnt = 0;
+		hdr->lcore_id = LCORE_ID_ANY;
+		rc = rte_mempool_populate_one_by_one(mp,
+						     RTE_MIN(bd->obj_per_bucket,
+							     max_objs - n_objs),
+						     iter + bucket_header_sz,
+						     iova, chunk_len, obj_cb);
+		if (rc < 0)
+			return rc;
+		n_objs += rc;
+		if (iova != RTE_BAD_IOVA)
+			iova += bucket_page_sz;
+	}
+
+	return n_objs;
+}
+
+static const struct rte_mempool_ops ops_bucket = {
+	.name = "bucket",
+	.alloc = bucket_alloc,
+	.free = bucket_free,
+	.enqueue = bucket_enqueue,
+	.dequeue = bucket_dequeue,
+	.get_count = bucket_get_count,
+	.calc_mem_size = bucket_calc_mem_size,
+	.populate = bucket_populate,
+};
+
+
+MEMPOOL_REGISTER_OPS(ops_bucket);
diff --git a/drivers/mempool/bucket/rte_mempool_bucket_version.map b/drivers/mempool/bucket/rte_mempool_bucket_version.map
new file mode 100644
index 0000000..9b9ab1a
--- /dev/null
+++ b/drivers/mempool/bucket/rte_mempool_bucket_version.map
@@ -0,0 +1,4 @@
+DPDK_18.05 {
+
+	local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 0169f3f..405785d 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -116,6 +116,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_VDEV_BUS)       += -lrte_bus_vdev
 ifeq ($(CONFIG_RTE_BUILD_SHARED_LIB),n)
 # plugins (link only if static libraries)
 
+_LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_BUCKET) += -lrte_mempool_bucket
 _LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_STACK)  += -lrte_mempool_stack
 
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET)  += -lrte_pmd_af_packet
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [RFC v2 13/17] mempool: support flushing the default cache of the mempool
  2018-01-23 13:15 ` [RFC v2 00/17] " Andrew Rybchenko
                     ` (11 preceding siblings ...)
  2018-01-23 13:16   ` [RFC v2 12/17] mempool/bucket: implement bucket mempool manager Andrew Rybchenko
@ 2018-01-23 13:16   ` Andrew Rybchenko
  2018-01-23 13:16   ` [RFC v2 14/17] mempool: implement abstract mempool info API Andrew Rybchenko
                     ` (8 subsequent siblings)
  21 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-01-23 13:16 UTC (permalink / raw)
  To: dev; +Cc: Olivier Matz, Artem V. Andreev

From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>

The mempool get/put API handles the cache itself, but sometimes it is
necessary to flush the cache explicitly.

The function is moved later in the file since it now requires
rte_mempool_default_cache().

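For example, an application that wants cached objects to become
visible to the mempool driver again (e.g. so that they can be reused
as parts of contiguous blocks) could do something like the following
sketch, where mp is assumed to be an existing mempool:

	/*
	 * Flush the calling lcore's default cache, if any; a
	 * user-owned cache may be passed instead of NULL.
	 */
	rte_mempool_cache_flush(NULL, mp);
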
Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 lib/librte_mempool/rte_mempool.h | 36 ++++++++++++++++++++----------------
 1 file changed, 20 insertions(+), 16 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 6a0039d..16d95ae 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -1148,22 +1148,6 @@ void
 rte_mempool_cache_free(struct rte_mempool_cache *cache);
 
 /**
- * Flush a user-owned mempool cache to the specified mempool.
- *
- * @param cache
- *   A pointer to the mempool cache.
- * @param mp
- *   A pointer to the mempool.
- */
-static __rte_always_inline void
-rte_mempool_cache_flush(struct rte_mempool_cache *cache,
-			struct rte_mempool *mp)
-{
-	rte_mempool_ops_enqueue_bulk(mp, cache->objs, cache->len);
-	cache->len = 0;
-}
-
-/**
  * Get a pointer to the per-lcore default mempool cache.
  *
  * @param mp
@@ -1186,6 +1170,26 @@ rte_mempool_default_cache(struct rte_mempool *mp, unsigned lcore_id)
 }
 
 /**
+ * Flush a user-owned mempool cache to the specified mempool.
+ *
+ * @param cache
+ *   A pointer to the mempool cache.
+ * @param mp
+ *   A pointer to the mempool.
+ */
+static __rte_always_inline void
+rte_mempool_cache_flush(struct rte_mempool_cache *cache,
+			struct rte_mempool *mp)
+{
+	if (cache == NULL)
+		cache = rte_mempool_default_cache(mp, rte_lcore_id());
+	if (cache == NULL || cache->len == 0)
+		return;
+	rte_mempool_ops_enqueue_bulk(mp, cache->objs, cache->len);
+	cache->len = 0;
+}
+
+/**
  * @internal Put several objects back in the mempool; used internally.
  * @param mp
  *   A pointer to the mempool structure.
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [RFC v2 14/17] mempool: implement abstract mempool info API
  2018-01-23 13:15 ` [RFC v2 00/17] " Andrew Rybchenko
                     ` (12 preceding siblings ...)
  2018-01-23 13:16   ` [RFC v2 13/17] mempool: support flushing the default cache of the mempool Andrew Rybchenko
@ 2018-01-23 13:16   ` Andrew Rybchenko
  2018-01-23 13:16   ` [RFC v2 15/17] mempool: support block dequeue operation Andrew Rybchenko
                     ` (7 subsequent siblings)
  21 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-01-23 13:16 UTC (permalink / raw)
  To: dev; +Cc: Olivier Matz, Artem V. Andreev

From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>

Primarily, it is intended as a way for the mempool driver to provide
additional information on how it lays out objects inside the mempool.

Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 lib/librte_mempool/rte_mempool.h     | 31 +++++++++++++++++++++++++++++++
 lib/librte_mempool/rte_mempool_ops.c | 15 +++++++++++++++
 2 files changed, 46 insertions(+)

diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 16d95ae..75630e6 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -218,6 +218,11 @@ struct rte_mempool_memhdr {
 	void *opaque;            /**< Argument passed to the free callback */
 };
 
+/*
+ * Additional information about the mempool
+ */
+struct rte_mempool_info;
+
 /**
  * The RTE mempool structure.
  */
@@ -484,6 +489,13 @@ int rte_mempool_populate_one_by_one(struct rte_mempool *mp,
 		void *vaddr, rte_iova_t iova, size_t len,
 		rte_mempool_populate_obj_cb_t *obj_cb);
 
+/**
+ * Get some additional information about a mempool.
+ */
+typedef int (*rte_mempool_get_info_t)(const struct rte_mempool *mp,
+		struct rte_mempool_info *info);
+
+
 /** Structure defining mempool operations structure */
 struct rte_mempool_ops {
 	char name[RTE_MEMPOOL_OPS_NAMESIZE]; /**< Name of mempool ops struct. */
@@ -502,6 +514,10 @@ struct rte_mempool_ops {
 	 * provided memory chunk.
 	 */
 	rte_mempool_populate_t populate;
+	/**
+	 * Get mempool info
+	 */
+	rte_mempool_get_info_t get_info;
 } __rte_cache_aligned;
 
 #define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */
@@ -662,6 +678,21 @@ int rte_mempool_ops_populate(struct rte_mempool *mp, unsigned int max_objs,
 			     rte_mempool_populate_obj_cb_t *obj_cb);
 
 /**
+ * @internal wrapper for mempool_ops get_info callback.
+ *
+ * @param mp [in]
+ *   Pointer to the memory pool.
+ * @param info [out]
+ *   Pointer to the rte_mempool_info structure
+ * @return
+ *   - 0: Success; the mempool driver supports retrieving supplementary
+ *        mempool information
+ *   - -ENOTSUP: the driver does not support the get_info op (valid case).
+ */
+int rte_mempool_ops_get_info(const struct rte_mempool *mp,
+			 struct rte_mempool_info *info);
+
+/**
  * @internal wrapper for mempool_ops free callback.
  *
  * @param mp
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
index 37b0802..949ab43 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -88,6 +88,7 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h)
 	ops->get_count = h->get_count;
 	ops->calc_mem_size = h->calc_mem_size;
 	ops->populate = h->populate;
+	ops->get_info = h->get_info;
 
 	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
 
@@ -160,6 +161,20 @@ rte_mempool_ops_populate(struct rte_mempool *mp, unsigned int max_objs,
 	return ops->populate(mp, max_objs, vaddr, iova, len, obj_cb);
 }
 
+/* wrapper to get additional mempool info */
+int
+rte_mempool_ops_get_info(const struct rte_mempool *mp,
+			 struct rte_mempool_info *info)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_get_ops(mp->ops_index);
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->get_info, -ENOTSUP);
+	return ops->get_info(mp, info);
+}
+
+
 /* sets mempool ops previously registered by rte_mempool_register_ops. */
 int
 rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [RFC v2 15/17] mempool: support block dequeue operation
  2018-01-23 13:15 ` [RFC v2 00/17] " Andrew Rybchenko
                     ` (13 preceding siblings ...)
  2018-01-23 13:16   ` [RFC v2 14/17] mempool: implement abstract mempool info API Andrew Rybchenko
@ 2018-01-23 13:16   ` Andrew Rybchenko
  2018-01-23 13:16   ` [RFC v2 16/17] mempool/bucket: implement " Andrew Rybchenko
                     ` (6 subsequent siblings)
  21 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-01-23 13:16 UTC (permalink / raw)
  To: dev; +Cc: Olivier Matz, Artem V. Andreev

From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>

If the mempool manager supports object blocks (physically and
virtually contiguous sets of objects), it is sufficient to get the
first object only, and the function avoids filling in information
about each block member.

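For example, a hypothetical consumer could check for block support and
then dequeue whole blocks (mp and N_BLOCKS are placeholders):

	struct rte_mempool_info info;
	void *blocks[N_BLOCKS];

	if (rte_mempool_ops_get_info(mp, &info) != 0 ||
	    info.contig_block_size == 0) {
		/* no block dequeue; fall back to rte_mempool_get_bulk() */
	} else if (rte_mempool_get_contig_blocks(mp, blocks, N_BLOCKS) == 0) {
		/*
		 * blocks[i] now points to the first of
		 * info.contig_block_size contiguous objects.
		 */
	}
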
Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 lib/librte_mempool/rte_mempool.h     | 125 ++++++++++++++++++++++++++++++++++-
 lib/librte_mempool/rte_mempool_ops.c |   1 +
 2 files changed, 125 insertions(+), 1 deletion(-)

diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 75630e6..fa216db 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -221,7 +221,10 @@ struct rte_mempool_memhdr {
 /*
  * Additional information about the mempool
  */
-struct rte_mempool_info;
+struct rte_mempool_info {
+	/** Number of objects in the contiguous block */
+	unsigned int contig_block_size;
+};
 
 /**
  * The RTE mempool structure.
@@ -399,6 +402,12 @@ typedef int (*rte_mempool_dequeue_t)(struct rte_mempool *mp,
 		void **obj_table, unsigned int n);
 
 /**
+ * Dequeue a number of contiguous object blocks from the external pool.
+ */
+typedef int (*rte_mempool_dequeue_contig_blocks_t)(struct rte_mempool *mp,
+		 void **first_obj_table, unsigned int n);
+
+/**
  * Return the number of available objects in the external pool.
  */
 typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
@@ -518,6 +527,10 @@ struct rte_mempool_ops {
 	 * Get mempool info
 	 */
 	rte_mempool_get_info_t get_info;
+	/**
+	 * Dequeue a number of contiguous object blocks.
+	 */
+	rte_mempool_dequeue_contig_blocks_t dequeue_contig_blocks;
 } __rte_cache_aligned;
 
 #define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */
@@ -596,6 +609,30 @@ rte_mempool_ops_dequeue_bulk(struct rte_mempool *mp,
 }
 
 /**
+ * @internal Wrapper for mempool_ops dequeue_contig_blocks callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param first_obj_table
+ *   Pointer to a table of void * pointers (first objects).
+ * @param n
+ *   Number of blocks to get.
+ * @return
+ *   - 0: Success; got n object blocks.
+ *   - <0: Error; code of dequeue function.
+ */
+static inline int
+rte_mempool_ops_dequeue_contig_blocks(struct rte_mempool *mp,
+		void **first_obj_table, unsigned int n)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_get_ops(mp->ops_index);
+	RTE_ASSERT(ops->dequeue_contig_blocks != NULL);
+	return ops->dequeue_contig_blocks(mp, first_obj_table, n);
+}
+
+/**
  * @internal wrapper for mempool_ops enqueue callback.
  *
  * @param mp
@@ -1500,6 +1537,92 @@ rte_mempool_get(struct rte_mempool *mp, void **obj_p)
 }
 
 /**
+ * @internal Get contiguous blocks of objects from the pool. Used internally.
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @param first_obj_table
+ *   A pointer to a pointer to the first object in each block.
+ * @param n
+ *   A number of blocks to get.
+ * @return
+ *   - 0: Success
+ *   - <0: Error
+ */
+static __rte_always_inline int
+__mempool_generic_get_contig_blocks(struct rte_mempool *mp,
+				    void **first_obj_table, unsigned int n)
+{
+	int ret;
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+	struct rte_mempool_info info;
+	rte_mempool_ops_get_info(mp, &info);
+#endif
+
+	ret = rte_mempool_ops_dequeue_contig_blocks(mp, first_obj_table, n);
+	if (ret < 0)
+		__MEMPOOL_STAT_ADD(mp, get_fail,
+				   n * info.contig_block_size);
+	else
+		__MEMPOOL_STAT_ADD(mp, get_success,
+				   n * info.contig_block_size);
+
+	return ret;
+}
+
+/**
+ * Get contiguous blocks of objects from the mempool.
+ *
+ * If cache is enabled, consider flushing it first, to reuse objects
+ * as soon as possible.
+ *
+ * The application should check that the driver supports the operation
+ * by calling rte_mempool_ops_get_info() and checking that `contig_block_size`
+ * is not zero.
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @param first_obj_table
+ *   A pointer to a pointer to the first object in each block.
+ * @param n
+ *   The number of blocks to get from mempool.
+ * @return
+ *   - 0: Success; the requested blocks are dequeued.
+ *   - -ENOBUFS: Not enough entries in the mempool; no object is retrieved.
+ *   - -EOPNOTSUPP: The mempool driver does not support block dequeue
+ */
+static __rte_always_inline int
+rte_mempool_get_contig_blocks(struct rte_mempool *mp,
+			      void **first_obj_table, unsigned int n)
+{
+	int ret;
+
+	ret = __mempool_generic_get_contig_blocks(mp, first_obj_table, n);
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+	if (ret == 0) {
+		struct rte_mempool_info info;
+		const size_t total_elt_sz =
+			mp->header_size + mp->elt_size + mp->trailer_size;
+		unsigned int i, j;
+
+		rte_mempool_ops_get_info(mp, &info);
+
+		for (i = 0; i < n; ++i) {
+			void *first_obj = first_obj_table[i];
+
+			for (j = 0; j < info.contig_block_size; ++j) {
+				void *obj;
+
+				obj = (void *)((uintptr_t)first_obj +
+					       j * total_elt_sz);
+				rte_mempool_check_cookies(mp, &obj, 1, 1);
+			}
+		}
+	}
+#endif
+	return ret;
+}
+
+/**
  * Return the number of entries in the mempool.
  *
  * When cache is enabled, this function has to browse the length of
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
index 949ab43..9fa8c23 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -89,6 +89,7 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h)
 	ops->calc_mem_size = h->calc_mem_size;
 	ops->populate = h->populate;
 	ops->get_info = h->get_info;
+	ops->dequeue_contig_blocks = h->dequeue_contig_blocks;
 
 	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [RFC v2 16/17] mempool/bucket: implement block dequeue operation
  2018-01-23 13:15 ` [RFC v2 00/17] " Andrew Rybchenko
                     ` (14 preceding siblings ...)
  2018-01-23 13:16   ` [RFC v2 15/17] mempool: support block dequeue operation Andrew Rybchenko
@ 2018-01-23 13:16   ` Andrew Rybchenko
  2018-01-23 13:16   ` [RFC v2 17/17] mempool/bucket: do not allow one lcore to grab all buckets Andrew Rybchenko
                     ` (5 subsequent siblings)
  21 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-01-23 13:16 UTC (permalink / raw)
  To: dev; +Cc: Olivier Matz, Artem V. Andreev

From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>

Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/mempool/bucket/rte_mempool_bucket.c | 52 +++++++++++++++++++++++++++++
 1 file changed, 52 insertions(+)

diff --git a/drivers/mempool/bucket/rte_mempool_bucket.c b/drivers/mempool/bucket/rte_mempool_bucket.c
index dc4e1dc..03fccf1 100644
--- a/drivers/mempool/bucket/rte_mempool_bucket.c
+++ b/drivers/mempool/bucket/rte_mempool_bucket.c
@@ -294,6 +294,46 @@ bucket_dequeue(struct rte_mempool *mp, void **obj_table, unsigned int n)
 	return rc;
 }
 
+static int
+bucket_dequeue_contig_blocks(struct rte_mempool *mp, void **first_obj_table,
+			     unsigned int n)
+{
+	struct bucket_data *bd = mp->pool_data;
+	const uint32_t header_size = bd->header_size;
+	struct bucket_stack *cur_stack = bd->buckets[rte_lcore_id()];
+	unsigned int n_buckets_from_stack = RTE_MIN(n, cur_stack->top);
+	struct bucket_header *hdr;
+	void **first_objp = first_obj_table;
+
+	bucket_adopt_orphans(bd);
+
+	n -= n_buckets_from_stack;
+	while (n_buckets_from_stack-- > 0) {
+		hdr = bucket_stack_pop_unsafe(cur_stack);
+		*first_objp++ = (uint8_t *)hdr + header_size;
+	}
+	if (n > 0) {
+		if (unlikely(rte_ring_dequeue_bulk(bd->shared_bucket_ring,
+						   first_objp, n, NULL) != n)) {
+			/* Return the already dequeued buckets */
+			while (first_objp-- != first_obj_table) {
+				bucket_stack_push(cur_stack,
+						  (uint8_t *)*first_objp -
+						  header_size);
+			}
+			rte_errno = ENOBUFS;
+			return -rte_errno;
+		}
+		while (n-- > 0) {
+			hdr = (struct bucket_header *)*first_objp;
+			hdr->lcore_id = rte_lcore_id();
+			*first_objp++ = (uint8_t *)hdr + header_size;
+		}
+	}
+
+	return 0;
+}
+
 static void
 count_underfilled_buckets(struct rte_mempool *mp,
 			  void *opaque,
@@ -546,6 +586,16 @@ bucket_populate(struct rte_mempool *mp, unsigned int max_objs,
 	return n_objs;
 }
 
+static int
+bucket_get_info(const struct rte_mempool *mp, struct rte_mempool_info *info)
+{
+	struct bucket_data *bd = mp->pool_data;
+
+	info->contig_block_size = bd->obj_per_bucket;
+	return 0;
+}
+
+
 static const struct rte_mempool_ops ops_bucket = {
 	.name = "bucket",
 	.alloc = bucket_alloc,
@@ -555,6 +605,8 @@ static const struct rte_mempool_ops ops_bucket = {
 	.get_count = bucket_get_count,
 	.calc_mem_size = bucket_calc_mem_size,
 	.populate = bucket_populate,
+	.get_info = bucket_get_info,
+	.dequeue_contig_blocks = bucket_dequeue_contig_blocks,
 };
 
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [RFC v2 17/17] mempool/bucket: do not allow one lcore to grab all buckets
  2018-01-23 13:15 ` [RFC v2 00/17] " Andrew Rybchenko
                     ` (15 preceding siblings ...)
  2018-01-23 13:16   ` [RFC v2 16/17] mempool/bucket: implement " Andrew Rybchenko
@ 2018-01-23 13:16   ` Andrew Rybchenko
  2018-01-31 16:44   ` [RFC v2 00/17] mempool: add bucket mempool driver Olivier Matz
                     ` (4 subsequent siblings)
  21 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-01-23 13:16 UTC (permalink / raw)
  To: dev; +Cc: Olivier Matz, Artem V. Andreev

From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>

Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/mempool/bucket/rte_mempool_bucket.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/drivers/mempool/bucket/rte_mempool_bucket.c b/drivers/mempool/bucket/rte_mempool_bucket.c
index 03fccf1..d1e7c27 100644
--- a/drivers/mempool/bucket/rte_mempool_bucket.c
+++ b/drivers/mempool/bucket/rte_mempool_bucket.c
@@ -42,6 +42,7 @@ struct bucket_data {
 	unsigned int header_size;
 	unsigned int total_elt_size;
 	unsigned int obj_per_bucket;
+	unsigned int bucket_stack_thresh;
 	uintptr_t bucket_page_mask;
 	struct rte_ring *shared_bucket_ring;
 	struct bucket_stack *buckets[RTE_MAX_LCORE];
@@ -139,6 +140,7 @@ bucket_enqueue(struct rte_mempool *mp, void * const *obj_table,
 	       unsigned int n)
 {
 	struct bucket_data *bd = mp->pool_data;
+	struct bucket_stack *local_stack = bd->buckets[rte_lcore_id()];
 	unsigned int i;
 	int rc = 0;
 
@@ -146,6 +148,15 @@ bucket_enqueue(struct rte_mempool *mp, void * const *obj_table,
 		rc = bucket_enqueue_single(bd, obj_table[i]);
 		RTE_ASSERT(rc == 0);
 	}
+	if (local_stack->top > bd->bucket_stack_thresh) {
+		rte_ring_enqueue_bulk(bd->shared_bucket_ring,
+				      &local_stack->objects
+				      [bd->bucket_stack_thresh],
+				      local_stack->top -
+				      bd->bucket_stack_thresh,
+				      NULL);
+	    local_stack->top = bd->bucket_stack_thresh;
+	}
 	return rc;
 }
 
@@ -408,6 +419,8 @@ bucket_alloc(struct rte_mempool *mp)
 	bd->obj_per_bucket = (bd->bucket_mem_size - bucket_header_size) /
 		bd->total_elt_size;
 	bd->bucket_page_mask = ~(rte_align64pow2(bd->bucket_mem_size) - 1);
+	/* eventually this should be a tunable parameter */
+	bd->bucket_stack_thresh = (mp->size / bd->obj_per_bucket) * 4 / 3;
 
 	if (mp->flags & MEMPOOL_F_SP_PUT)
 		rg_flags |= RING_F_SP_ENQ;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* Re: [RFC v2 00/17] mempool: add bucket mempool driver
  2018-01-23 13:15 ` [RFC v2 00/17] " Andrew Rybchenko
                     ` (16 preceding siblings ...)
  2018-01-23 13:16   ` [RFC v2 17/17] mempool/bucket: do not allow one lcore to grab all buckets Andrew Rybchenko
@ 2018-01-31 16:44   ` Olivier Matz
  2018-03-10 15:39   ` [PATCH v1 0/9] mempool: prepare to add bucket driver Andrew Rybchenko
                     ` (3 subsequent siblings)
  21 siblings, 0 replies; 197+ messages in thread
From: Olivier Matz @ 2018-01-31 16:44 UTC (permalink / raw)
  To: Andrew Rybchenko
  Cc: dev, Santosh Shukla, Jerin Jacob, Hemant Agrawal, Shreyansh Jain

Hi,

On Tue, Jan 23, 2018 at 01:15:55PM +0000, Andrew Rybchenko wrote:
> The patch series starts from generic enhancements suggested by Olivier.
> Basically it adds driver callbacks to calculate required memory size and
> to populate objects using provided memory area. It allows to remove
> so-called capability flags used before to tell generic code how to
> allocate and slice allocated memory into mempool objects.
> Clean up which removes get_capabilities and register_memory_area is
> not strictly required, but I think right thing to do.
> Existing mempool drivers are updated.
> 
> I've kept rte_mempool_populate_iova_tab() intact since it seems to
> be not directly related XMEM API functions.
> 
> The patch series adds bucket mempool driver which allows to allocate
> (both physically and virtually) contiguous blocks of objects and adds
> mempool API to do it. It is still capable to provide separate objects,
> but it is definitely more heavy-weight than ring/stack drivers.
> The driver will be used by the future Solarflare driver enhancements
> which allow to utilize physical contiguous blocks in the NIC
> hardware/firmware.
> 
> The target usecase is dequeue in blocks and enqueue separate objects
> back (which are collected in buckets to be dequeued). So, the memory
> pool with bucket driver is created by an application and provided to
> networking PMD receive queue. The choice of bucket driver is done using
> rte_eth_dev_pool_ops_supported(). A PMD that relies upon contiguous
> block allocation should report the bucket driver as the only supported
> and preferred one.
> 
> Introduction of the contiguous block dequeue operation is proven by
> performance measurements using autotest with minor enhancements:
>  - in the original test bulks are powers of two, which is unacceptable
>    for us, so they are changed to multiple of contig_block_size;
>  - the test code is duplicated to support plain dequeue and
>    dequeue_contig_blocks;
>  - all the extra test variations (with/without cache etc) are eliminated;
>  - a fake read from the dequeued buffer is added (in both cases) to
>    simulate mbufs access.
> 
> start performance test for bucket (without cache)
> mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=   1 n_keep=  30 Srate_persec=   111935488
> mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=   1 n_keep=  60 Srate_persec=   115290931
> mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=  15 n_keep=  30 Srate_persec=   353055539
> mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=  15 n_keep=  60 Srate_persec=   353330790
> mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=   1 n_keep=  30 Srate_persec=   224657407
> mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=   1 n_keep=  60 Srate_persec=   230411468
> mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=  15 n_keep=  30 Srate_persec=   706700902
> mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=  15 n_keep=  60 Srate_persec=   703673139
> mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=   1 n_keep=  30 Srate_persec=   425236887
> mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=   1 n_keep=  60 Srate_persec=   437295512
> mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=  15 n_keep=  30 Srate_persec=  1343409356
> mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=  15 n_keep=  60 Srate_persec=  1336567397
> start performance test for bucket (without cache + contiguous dequeue)
> mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=   1 n_keep=  30 Crate_persec=   122945536
> mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=   1 n_keep=  60 Crate_persec=   126458265
> mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=  15 n_keep=  30 Crate_persec=   374262988
> mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=  15 n_keep=  60 Crate_persec=   377316966
> mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=   1 n_keep=  30 Crate_persec=   244842496
> mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=   1 n_keep=  60 Crate_persec=   251618917
> mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=  15 n_keep=  30 Crate_persec=   751226060
> mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=  15 n_keep=  60 Crate_persec=   756233010
> mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=   1 n_keep=  30 Crate_persec=   462068120
> mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=   1 n_keep=  60 Crate_persec=   476997221
> mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=  15 n_keep=  30 Crate_persec=  1432171313
> mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=  15 n_keep=  60 Crate_persec=  1438829771
> 
> The number of objects in the contiguous block is a function of bucket
> memory size (.config option) and total element size. In the future
> additional API with possibility to pass parameters on mempool allocation
> may be added.
> 
> It breaks ABI since changes rte_mempool_ops. Also it removes
> rte_mempool_ops_register_memory_area() and
> rte_mempool_ops_get_capabilities() since corresponding callbacks are
> removed.
> 
> The target DPDK release is 18.05.
> 
> v2:
>   - add driver ops to calculate required memory size and populate
>     mempool objects, remove extra flags which were required before
>     to control it
>   - transition of octeontx and dpaa drivers to the new callbacks
>   - change info API to get information from driver required to
>     API user to know contiguous block size
>   - remove get_capabilities (not required any more and may be
>     substituted with more in info get API)
>   - remove register_memory_area since it is substituted with
>     populate callback which can do more
>   - use SPDX tags
>   - avoid all objects affinity to single lcore
>   - fix bucket get_count
>   - deprecate XMEM API
>   - avoid introduction of a new function to flush cache
>   - fix NO_CACHE_ALIGN case in bucket mempool
> 
> Andrew Rybchenko (10):
>   mempool: fix phys contig check if populate default skipped
>   mempool: add op to calculate memory size to be allocated
>   mempool/octeontx: add callback to calculate memory size
>   mempool: add op to populate objects using provided memory
>   mempool/octeontx: implement callback to populate objects
>   mempool: remove callback to get capabilities
>   mempool: deprecate xmem functions
>   mempool/octeontx: prepare to remove register memory area op
>   mempool/dpaa: convert to use populate driver op
>   mempool: remove callback to register memory area
> 
> Artem V. Andreev (7):
>   mempool: ensure the mempool is initialized before populating
>   mempool/bucket: implement bucket mempool manager
>   mempool: support flushing the default cache of the mempool
>   mempool: implement abstract mempool info API
>   mempool: support block dequeue operation
>   mempool/bucket: implement block dequeue operation
>   mempool/bucket: do not allow one lcore to grab all buckets
> 
>  MAINTAINERS                                        |   9 +
>  config/common_base                                 |   2 +
>  drivers/mempool/Makefile                           |   1 +
>  drivers/mempool/bucket/Makefile                    |  27 +
>  drivers/mempool/bucket/rte_mempool_bucket.c        | 626 +++++++++++++++++++++
>  .../mempool/bucket/rte_mempool_bucket_version.map  |   4 +
>  drivers/mempool/dpaa/dpaa_mempool.c                |  13 +-
>  drivers/mempool/octeontx/rte_mempool_octeontx.c    |  63 ++-
>  lib/librte_mempool/rte_mempool.c                   | 192 ++++---
>  lib/librte_mempool/rte_mempool.h                   | 366 +++++++++---
>  lib/librte_mempool/rte_mempool_ops.c               |  48 +-
>  lib/librte_mempool/rte_mempool_version.map         |  11 +-
>  mk/rte.app.mk                                      |   1 +
>  13 files changed, 1184 insertions(+), 179 deletions(-)
>  create mode 100644 drivers/mempool/bucket/Makefile
>  create mode 100644 drivers/mempool/bucket/rte_mempool_bucket.c
>  create mode 100644 drivers/mempool/bucket/rte_mempool_bucket_version.map

Globally, the RFC looks fine to me. Thanks for this good work.

I didn't review the mempool/bucket part like I did last time. About the
changes to the mempool API, I think it's a good enhancement: it makes
things more flexible and removes complexity in the common code. Some
points may still need some discussion, for instance how the PMDs and
applications take advantage of block dequeue operations and get_info().

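For instance, is the expected application-side flow for selecting the
bucket driver roughly the following sketch (NB_OBJS, OBJ_SIZE and the
port/socket ids are just placeholders, error unwinding omitted)?

static struct rte_mempool *
create_rx_pool(uint16_t port_id, int socket_id)
{
	struct rte_mempool *mp;

	/* the PMD should report "bucket" as supported/preferred */
	if (rte_eth_dev_pool_ops_supported(port_id, "bucket") < 0)
		return NULL;

	mp = rte_mempool_create_empty("rx_pool", NB_OBJS, OBJ_SIZE,
				      0, 0, socket_id, 0);
	if (mp == NULL ||
	    rte_mempool_set_ops_byname(mp, "bucket", NULL) != 0 ||
	    rte_mempool_populate_default(mp) < 0)
		return NULL;
	return mp;
}
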
I have some specific comments that are sent directly as replies to the
patches.

Since it changes dpaa and octeontx, having feedback from people from NXP
and Cavium Networks would be good.

Thanks,
Olivier

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [RFC v2 01/17] mempool: fix phys contig check if populate default skipped
  2018-01-23 13:15   ` [RFC v2 01/17] mempool: fix phys contig check if populate default skipped Andrew Rybchenko
@ 2018-01-31 16:45     ` Olivier Matz
  2018-02-01  5:05       ` santosh
  2018-02-01 14:02     ` [PATCH] " Andrew Rybchenko
  1 sibling, 1 reply; 197+ messages in thread
From: Olivier Matz @ 2018-01-31 16:45 UTC (permalink / raw)
  To: Andrew Rybchenko; +Cc: dev, stable, santosh.shukla, jerin.jacob

On Tue, Jan 23, 2018 at 01:15:56PM +0000, Andrew Rybchenko wrote:
> There is not specified dependency between rte_mempool_populate_default()
> and rte_mempool_populate_iova(). So, the second should not rely on the
> fact that the first adds capability flags to the mempool flags.
> 
> Fixes: 65cf769f5e6a ("mempool: detect physical contiguous objects")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>

Looks good to me. I agree it's strange that the mp->flags are
updated with capabilities only in rte_mempool_populate_default().
I see that this behavior is removed later in the patchset since the
get_capa() is removed!

However maybe this single patch could go in 18.02.
+Santosh +Jerin since it's mostly about Octeon.

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [RFC v2 02/17] mempool: add op to calculate memory size to be allocated
  2018-01-23 13:15   ` [RFC v2 02/17] mempool: add op to calculate memory size to be allocated Andrew Rybchenko
@ 2018-01-31 16:45     ` Olivier Matz
  2018-02-01  7:15       ` Andrew Rybchenko
  0 siblings, 1 reply; 197+ messages in thread
From: Olivier Matz @ 2018-01-31 16:45 UTC (permalink / raw)
  To: Andrew Rybchenko; +Cc: dev

On Tue, Jan 23, 2018 at 01:15:57PM +0000, Andrew Rybchenko wrote:
> Size of memory chunk required to populate mempool objects depends
> on how objects are stored in the memory. Different mempool drivers
> may have different requirements and a new operation allows to
> calculate memory size in accordance with driver requirements and
> advertise requirements on minimum memory chunk size and alignment
> in a generic way.
> 
> Suggested-by: Olivier Matz <olivier.matz@6wind.com>
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>

The general idea is fine. Few small comments below.

[...]

> ---
>  lib/librte_mempool/rte_mempool.c           | 95 ++++++++++++++++++++++--------
>  lib/librte_mempool/rte_mempool.h           | 63 +++++++++++++++++++-
>  lib/librte_mempool/rte_mempool_ops.c       | 18 ++++++
>  lib/librte_mempool/rte_mempool_version.map |  8 +++
>  4 files changed, 159 insertions(+), 25 deletions(-)
> 
> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> index e783b9a..1f54f95 100644
> --- a/lib/librte_mempool/rte_mempool.c
> +++ b/lib/librte_mempool/rte_mempool.c
> @@ -233,13 +233,14 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
>  	return sz->total_size;
>  }
>  
> -
>  /*
> - * Calculate maximum amount of memory required to store given number of objects.
> + * Internal function to calculate required memory chunk size shared
> + * by default implementation of the corresponding callback and
> + * deprecated external function.
>   */
> -size_t
> -rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
> -		      unsigned int flags)
> +static size_t
> +rte_mempool_xmem_size_int(uint32_t elt_num, size_t total_elt_sz,
> +			  uint32_t pg_shift, unsigned int flags)
>  {

I'm not getting why the function is changed to a static function
in this patch, given that rte_mempool_xmem_size() is redefined
below as a simple wrapper.


>  	size_t obj_per_page, pg_num, pg_sz;
>  	unsigned int mask;
> @@ -264,6 +265,49 @@ rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
>  	return pg_num << pg_shift;
>  }
>  
> +ssize_t
> +rte_mempool_calc_mem_size_def(const struct rte_mempool *mp,
> +			      uint32_t obj_num, uint32_t pg_shift,
> +			      size_t *min_chunk_size,
> +			      __rte_unused size_t *align)
> +{
> +	unsigned int mp_flags;
> +	int ret;
> +	size_t total_elt_sz;
> +	size_t mem_size;
> +
> +	/* Get mempool capabilities */
> +	mp_flags = 0;
> +	ret = rte_mempool_ops_get_capabilities(mp, &mp_flags);
> +	if ((ret < 0) && (ret != -ENOTSUP))
> +		return ret;
> +
> +	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
> +
> +	mem_size = rte_mempool_xmem_size_int(obj_num, total_elt_sz, pg_shift,
> +					     mp->flags | mp_flags);
> +
> +	if (mp_flags & MEMPOOL_F_CAPA_PHYS_CONTIG)
> +		*min_chunk_size = mem_size;
> +	else
> +		*min_chunk_size = RTE_MAX((size_t)1 << pg_shift, total_elt_sz);
> +
> +	/* No extra align requirements by default */

maybe set *align = 0 ?
I think it's not sane to keep the variable uninitialized.

[...]

> +/**
> + * Calculate memory size required to store specified number of objects.
> + *
> + * Note that if object size is bigger then page size, then it assumes
> + * that pages are grouped in subsets of physically continuous pages big
> + * enough to store at least one object.
> + *
> + * @param mp
> + *   Pointer to the memory pool.
> + * @param obj_num
> + *   Number of objects.
> + * @param pg_shift
> + *   LOG2 of the physical pages size. If set to 0, ignore page boundaries.
> + * @param min_chunk_size
> + *   Location for minimum size of the memory chunk which may be used to
> + *   store memory pool objects.
> + * @param align
> + *   Location with required memory chunk alignment.
> + * @return
> + *   Required memory size aligned at page boundary.
> + */
> +typedef ssize_t (*rte_mempool_calc_mem_size_t)(const struct rte_mempool *mp,
> +		uint32_t obj_num,  uint32_t pg_shift,
> +		size_t *min_chunk_size, size_t *align);

The API comment can be enhanced by saying that min_chunk_size and align
are output only parameters. For align, the '0' value could be described
as well.


> +
> +/**
> + * Default way to calculate memory size required to store specified
> + * number of objects.
> + */
> +ssize_t rte_mempool_calc_mem_size_def(const struct rte_mempool *mp,
> +				      uint32_t obj_num, uint32_t pg_shift,
> +				      size_t *min_chunk_size, size_t *align);
> +

The behavior of the default function could be better explained.
I would prefer "default" instead of "def".

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [RFC v2 04/17] mempool: add op to populate objects using provided memory
  2018-01-23 13:15   ` [RFC v2 04/17] mempool: add op to populate objects using provided memory Andrew Rybchenko
@ 2018-01-31 16:45     ` Olivier Matz
  2018-02-01  8:51       ` Andrew Rybchenko
  0 siblings, 1 reply; 197+ messages in thread
From: Olivier Matz @ 2018-01-31 16:45 UTC (permalink / raw)
  To: Andrew Rybchenko; +Cc: dev

On Tue, Jan 23, 2018 at 01:15:59PM +0000, Andrew Rybchenko wrote:
> The callback allows to customize how objects are stored in the
> memory chunk. Default implementation of the callback which simply
> puts objects one by one is available.
> 
> Suggested-by: Olivier Matz <olivier.matz@6wind.com>
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>

...

>  
> +int
> +rte_mempool_populate_one_by_one(struct rte_mempool *mp, unsigned int max_objs,
> +		void *vaddr, rte_iova_t iova, size_t len,
> +		rte_mempool_populate_obj_cb_t *obj_cb)

We shall find a better name for this function.
Unfortunately rte_mempool_populate_default() already exists...

I'm also wondering if having a file rte_mempool_ops_default.c
with all the default behaviors would make sense?

...

> @@ -466,16 +487,13 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
>  	else
>  		off = RTE_PTR_ALIGN_CEIL(vaddr, RTE_CACHE_LINE_SIZE) - vaddr;
>  
> -	while (off + total_elt_sz <= len && mp->populated_size < mp->size) {
> -		off += mp->header_size;
> -		if (iova == RTE_BAD_IOVA)
> -			mempool_add_elem(mp, (char *)vaddr + off,
> -				RTE_BAD_IOVA);
> -		else
> -			mempool_add_elem(mp, (char *)vaddr + off, iova + off);
> -		off += mp->elt_size + mp->trailer_size;
> -		i++;
> -	}
> +	if (off > len)
> +		return -EINVAL;
> +
> +	i = rte_mempool_ops_populate(mp, mp->size - mp->populated_size,
> +		(char *)vaddr + off,
> +		(iova == RTE_BAD_IOVA) ? RTE_BAD_IOVA : (iova + off),
> +		len - off, mempool_add_elem);

My initial idea was to provide populate_iova(), populate_virt(), ...
as mempool ops. I don't see any strong requirement for doing it now, but
on the other hand it would break the API to do it later. What's
your opinion?

Also, I see that mempool_add_elem() is passed as a callback to
rte_mempool_ops_populate(). Instead, would it make sense to
export mempool_add_elem() and let the implementation of the populate()
op call it?

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [RFC v2 11/17] mempool: ensure the mempool is initialized before populating
  2018-01-23 13:16   ` [RFC v2 11/17] mempool: ensure the mempool is initialized before populating Andrew Rybchenko
@ 2018-01-31 16:45     ` Olivier Matz
  2018-02-01  8:53       ` Andrew Rybchenko
  0 siblings, 1 reply; 197+ messages in thread
From: Olivier Matz @ 2018-01-31 16:45 UTC (permalink / raw)
  To: Andrew Rybchenko; +Cc: dev, Artem V. Andreev

On Tue, Jan 23, 2018 at 01:16:06PM +0000, Andrew Rybchenko wrote:
> From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>
> 
> Callback to calculate required memory area size may require mempool
> driver data to be already allocated and initialized.
> 
> Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
> ---
>  lib/librte_mempool/rte_mempool.c | 29 ++++++++++++++++++++++-------
>  1 file changed, 22 insertions(+), 7 deletions(-)
> 
> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> index fc9c95a..cbb4dd5 100644
> --- a/lib/librte_mempool/rte_mempool.c
> +++ b/lib/librte_mempool/rte_mempool.c
> @@ -370,6 +370,21 @@ rte_mempool_free_memchunks(struct rte_mempool *mp)
>  	}
>  }
>  
> +static int
> +mempool_maybe_initialize(struct rte_mempool *mp)
> +{
> +	int ret;
> +
> +	/* create the internal ring if not already done */
> +	if ((mp->flags & MEMPOOL_F_POOL_CREATED) == 0) {
> +		ret = rte_mempool_ops_alloc(mp);
> +		if (ret != 0)
> +			return ret;
> +		mp->flags |= MEMPOOL_F_POOL_CREATED;
> +	}
> +	return 0;
> +}

mempool_ops_alloc_once() ?

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [RFC v2 01/17] mempool: fix phys contig check if populate default skipped
  2018-01-31 16:45     ` Olivier Matz
@ 2018-02-01  5:05       ` santosh
  2018-02-01  6:54         ` Andrew Rybchenko
  0 siblings, 1 reply; 197+ messages in thread
From: santosh @ 2018-02-01  5:05 UTC (permalink / raw)
  To: Olivier Matz, Andrew Rybchenko; +Cc: dev, stable, jerin.jacob


On Wednesday 31 January 2018 10:15 PM, Olivier Matz wrote:
> On Tue, Jan 23, 2018 at 01:15:56PM +0000, Andrew Rybchenko wrote:
>> There is not specified dependency between rte_mempool_populate_default()
>> and rte_mempool_populate_iova(). So, the second should not rely on the
>> fact that the first adds capability flags to the mempool flags.
>>
>> Fixes: 65cf769f5e6a ("mempool: detect physical contiguous objects")
>> Cc: stable@dpdk.org
>>
>> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
> Looks good to me. I agree it's strange that the mp->flags are
> updated with capabilities only in rte_mempool_populate_default().
> I see that this behavior is removed later in the patchset since the
> get_capa() is removed!
>
> However maybe this single patch could go in 18.02.
> +Santosh +Jerin since it's mostly about Octeon.

rte_mempool_xmem_size() should return the correct size if the
MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS flag is set in 'mp->flags'. That's
why _ops_get_capabilities() is called in _populate_default() but not
in _populate_iova().
I think this patch on its own may break the octeontx mempool.

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [RFC v2 01/17] mempool: fix phys contig check if populate default skipped
  2018-02-01  5:05       ` santosh
@ 2018-02-01  6:54         ` Andrew Rybchenko
  2018-02-01  9:09           ` santosh
  0 siblings, 1 reply; 197+ messages in thread
From: Andrew Rybchenko @ 2018-02-01  6:54 UTC (permalink / raw)
  To: santosh, Olivier Matz; +Cc: dev, stable, jerin.jacob

On 02/01/2018 08:05 AM, santosh wrote:
> On Wednesday 31 January 2018 10:15 PM, Olivier Matz wrote:
>> On Tue, Jan 23, 2018 at 01:15:56PM +0000, Andrew Rybchenko wrote:
>>> There is not specified dependency between rte_mempool_populate_default()
>>> and rte_mempool_populate_iova(). So, the second should not rely on the
>>> fact that the first adds capability flags to the mempool flags.
>>>
>>> Fixes: 65cf769f5e6a ("mempool: detect physical contiguous objects")
>>> Cc: stable@dpdk.org
>>>
>>> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
>> Looks good to me. I agree it's strange that the mp->flags are
>> updated with capabilities only in rte_mempool_populate_default().
>> I see that this behavior is removed later in the patchset since the
>> get_capa() is removed!
>>
>> However maybe this single patch could go in 18.02.
>> +Santosh +Jerin since it's mostly about Octeon.
> rte_mempool_xmem_size should return correct size if MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS flag
> is set in 'mp->flags'. Thats why _ops_get_capabilities() called in _populate_default() but not
> at _populate_iova().
> I think, this 'alone' patch may break octeontx mempool.

The patch does not touch rte_mempool_populate_default().
_ops_get_capabilities() is still called there before
rte_mempool_xmem_size(). The theoretical problem the patch tries to
fix is the case when rte_mempool_populate_default() is not called at
all, i.e. the application calls _ops_get_capabilities() to get the
flags, then, together with mp->flags, calls rte_mempool_xmem_size()
directly, allocates the calculated amount of memory and calls
_populate_iova().

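Roughly, such an application would do something like this sketch
(pg_shift, vaddr, iova and the way the memory is obtained are just
placeholders):

	unsigned int mp_flags = 0;
	size_t total_elt_sz = mp->header_size + mp->elt_size +
			      mp->trailer_size;
	size_t len;

	rte_mempool_ops_get_capabilities(mp, &mp_flags);
	len = rte_mempool_xmem_size(mp->size, total_elt_sz, pg_shift,
				    mp->flags | mp_flags);
	/* allocate 'len' bytes of memory at 'vaddr'/'iova' somehow */
	rte_mempool_populate_iova(mp, vaddr, iova, len, NULL, NULL);
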
Since later patches of the series reconsider memory size
calculation etc., it is up to you whether it makes sense to apply it
in 18.02 as a fix.

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [RFC v2 02/17] mempool: add op to calculate memory size to be allocated
  2018-01-31 16:45     ` Olivier Matz
@ 2018-02-01  7:15       ` Andrew Rybchenko
  0 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-02-01  7:15 UTC (permalink / raw)
  To: Olivier Matz; +Cc: dev

On 01/31/2018 07:45 PM, Olivier Matz wrote:
> On Tue, Jan 23, 2018 at 01:15:57PM +0000, Andrew Rybchenko wrote:
>> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
>> index e783b9a..1f54f95 100644
>> --- a/lib/librte_mempool/rte_mempool.c
>> +++ b/lib/librte_mempool/rte_mempool.c
>> @@ -233,13 +233,14 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
>>   	return sz->total_size;
>>   }
>>   
>> -
>>   /*
>> - * Calculate maximum amount of memory required to store given number of objects.
>> + * Internal function to calculate required memory chunk size shared
>> + * by default implementation of the corresponding callback and
>> + * deprecated external function.
>>    */
>> -size_t
>> -rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
>> -		      unsigned int flags)
>> +static size_t
>> +rte_mempool_xmem_size_int(uint32_t elt_num, size_t total_elt_sz,
>> +			  uint32_t pg_shift, unsigned int flags)
>>   {
> I'm not getting why the function is changed to a static function
> in this patch, given that rte_mempool_xmem_size() is redefined
> below as a simple wrapper.

rte_mempool_xmem_size() is deprecated in a subsequent patch.
This static function is created to reuse the code and avoid calling a
deprecated function from a non-deprecated one. Yes, it is definitely
unclear here, and now I think it is better to make the change in the
patch which deprecates rte_mempool_xmem_size().

>>   	size_t obj_per_page, pg_num, pg_sz;
>>   	unsigned int mask;
>> @@ -264,6 +265,49 @@ rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
>>   	return pg_num << pg_shift;
>>   }
>>   
>> +ssize_t
>> +rte_mempool_calc_mem_size_def(const struct rte_mempool *mp,
>> +			      uint32_t obj_num, uint32_t pg_shift,
>> +			      size_t *min_chunk_size,
>> +			      __rte_unused size_t *align)
>> +{
>> +	unsigned int mp_flags;
>> +	int ret;
>> +	size_t total_elt_sz;
>> +	size_t mem_size;
>> +
>> +	/* Get mempool capabilities */
>> +	mp_flags = 0;
>> +	ret = rte_mempool_ops_get_capabilities(mp, &mp_flags);
>> +	if ((ret < 0) && (ret != -ENOTSUP))
>> +		return ret;
>> +
>> +	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
>> +
>> +	mem_size = rte_mempool_xmem_size_int(obj_num, total_elt_sz, pg_shift,
>> +					     mp->flags | mp_flags);
>> +
>> +	if (mp_flags & MEMPOOL_F_CAPA_PHYS_CONTIG)
>> +		*min_chunk_size = mem_size;
>> +	else
>> +		*min_chunk_size = RTE_MAX((size_t)1 << pg_shift, total_elt_sz);
>> +
>> +	/* No extra align requirements by default */
> maybe set *align = 0 ?
> I think it's not sane to keep the variable uninitialized.

Right now align is in/out. On input it is either the cache line size
(when hugepages are available) or the page size. These external
limitations could be important for size calculations.
_ops_calc_mem_size() is only allowed to strengthen the alignment. If
that is acceptable, I'll try to highlight it in the description and
check it in the code.
Maybe a more transparent solution is to make it input-only and state
that it is the callback's full responsibility to take care of all
alignment requirements (e.g. those imposed by the absence of
hugepages).
What do you think?

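For illustration, under the current in/out interpretation a driver
callback would look roughly like the sketch below, built on top of
the rte_mempool_calc_mem_size_def() helper from this patch
(MY_BLOCK_ALIGN is a placeholder):

static ssize_t
my_calc_mem_size(const struct rte_mempool *mp, uint32_t obj_num,
		 uint32_t pg_shift, size_t *min_chunk_size, size_t *align)
{
	ssize_t mem_size;

	mem_size = rte_mempool_calc_mem_size_def(mp, obj_num, pg_shift,
						 min_chunk_size, align);
	if (mem_size < 0)
		return mem_size;
	/* only strengthen the alignment requested by the generic code */
	*align = RTE_MAX(*align, (size_t)MY_BLOCK_ALIGN);
	return mem_size;
}
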
> [...]
>
>> +/**
>> + * Calculate memory size required to store specified number of objects.
>> + *
>> + * Note that if object size is bigger then page size, then it assumes
>> + * that pages are grouped in subsets of physically continuous pages big
>> + * enough to store at least one object.
>> + *
>> + * @param mp
>> + *   Pointer to the memory pool.
>> + * @param obj_num
>> + *   Number of objects.
>> + * @param pg_shift
>> + *   LOG2 of the physical pages size. If set to 0, ignore page boundaries.
>> + * @param min_chunk_size
>> + *   Location for minimum size of the memory chunk which may be used to
>> + *   store memory pool objects.
>> + * @param align
>> + *   Location with required memory chunk alignment.
>> + * @return
>> + *   Required memory size aligned at page boundary.
>> + */
>> +typedef ssize_t (*rte_mempool_calc_mem_size_t)(const struct rte_mempool *mp,
>> +		uint32_t obj_num,  uint32_t pg_shift,
>> +		size_t *min_chunk_size, size_t *align);
> The API comment can be enhanced by saying that min_chunk_size and align
> are output only parameters. For align, the '0' value could be described
> as well.

OK, will fix as soon as we decide if align is input only or input/output.

>> +
>> +/**
>> + * Default way to calculate memory size required to store specified
>> + * number of objects.
>> + */
>> +ssize_t rte_mempool_calc_mem_size_def(const struct rte_mempool *mp,
>> +				      uint32_t obj_num, uint32_t pg_shift,
>> +				      size_t *min_chunk_size, size_t *align);
>> +
> The behavior of the default function could be better explained.
> I would prefer "default" instead of "def".

Will do.

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [RFC v2 04/17] mempool: add op to populate objects using provided memory
  2018-01-31 16:45     ` Olivier Matz
@ 2018-02-01  8:51       ` Andrew Rybchenko
  0 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-02-01  8:51 UTC (permalink / raw)
  To: Olivier Matz; +Cc: dev

On 01/31/2018 07:45 PM, Olivier Matz wrote:
> On Tue, Jan 23, 2018 at 01:15:59PM +0000, Andrew Rybchenko wrote:
>> The callback allows to customize how objects are stored in the
>> memory chunk. Default implementation of the callback which simply
>> puts objects one by one is available.
>>
>> Suggested-by: Olivier Matz <olivier.matz@6wind.com>
>> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
> ...
>
>>   
>> +int
>> +rte_mempool_populate_one_by_one(struct rte_mempool *mp, unsigned int max_objs,
>> +		void *vaddr, rte_iova_t iova, size_t len,
>> +		rte_mempool_populate_obj_cb_t *obj_cb)
> We shall find a better name for this function.
> Unfortunatly rte_mempool_populate_default() already exists...

I have no better idea right now, but we'll try in the next version.
Maybe rte_mempool_op_populate_default()?

> I'm also wondering if having a file rte_mempool_ops_default.c
> with all the default behaviors would make sense?

I think it is a good idea. Will do.

> ...
>
>> @@ -466,16 +487,13 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
>>   	else
>>   		off = RTE_PTR_ALIGN_CEIL(vaddr, RTE_CACHE_LINE_SIZE) - vaddr;
>>   
>> -	while (off + total_elt_sz <= len && mp->populated_size < mp->size) {
>> -		off += mp->header_size;
>> -		if (iova == RTE_BAD_IOVA)
>> -			mempool_add_elem(mp, (char *)vaddr + off,
>> -				RTE_BAD_IOVA);
>> -		else
>> -			mempool_add_elem(mp, (char *)vaddr + off, iova + off);
>> -		off += mp->elt_size + mp->trailer_size;
>> -		i++;
>> -	}
>> +	if (off > len)
>> +		return -EINVAL;
>> +
>> +	i = rte_mempool_ops_populate(mp, mp->size - mp->populated_size,
>> +		(char *)vaddr + off,
>> +		(iova == RTE_BAD_IOVA) ? RTE_BAD_IOVA : (iova + off),
>> +		len - off, mempool_add_elem);
> My initial idea was to provide populate_iova(), populate_virt(), ...
> as mempool ops. I don't see any strong requirement for doing it now, but
> on the other hand it would break the API to do it later. What's
> your opinion?

The suggested solution keeps only generic housekeeping inside
rte_mempool_populate_iova() (driver-data alloc/init, a generic check
whether the pool is already populated, maintenance of the memory
chunks list and object cache-line alignment requirements). I think
only the last item is questionable, but cache-line alignment is
hard-wired in the object size calculation as well, which is not
customizable yet. Maybe we should add a callback for object size
calculation with a default fallback and move object cache-line
alignment into the populate() callback.
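
For example, a driver's populate() callback could then take over the
start-address alignment itself and still reuse the one-by-one helper
(a rough sketch only, not from the series; XXX_OBJ_ALIGN and the xxx_
prefix are made up, and the helper name may change as discussed above):

static int
xxx_populate(struct rte_mempool *mp, unsigned int max_objs,
	     void *vaddr, rte_iova_t iova, size_t len,
	     rte_mempool_populate_obj_cb_t *obj_cb)
{
	size_t off;

	/* Driver-specific start alignment instead of the cache-line
	 * alignment currently hard-wired in the library.
	 */
	off = RTE_PTR_ALIGN_CEIL(vaddr, XXX_OBJ_ALIGN) - vaddr;
	if (off > len)
		return -EINVAL;

	return rte_mempool_populate_one_by_one(mp, max_objs,
		(char *)vaddr + off,
		(iova == RTE_BAD_IOVA) ? RTE_BAD_IOVA : iova + off,
		len - off, obj_cb);
}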

As for populate_virt() etc., right now all these functions finally
come down to populate_iova(). I have no customization use cases for
these functions in mind, so it is hard to guess the required set of
parameters. That's why I kept it as is for now.
(In general I prefer to avoid overkill solutions since the chances of
guessing the prototype 100% correctly are small.)

Maybe someone else on the list has use cases in mind?

> Also, I see that mempool_add_elem() is passed as a callback to
> rte_mempool_ops_populate(). Instead, would it make sense to
> export mempool_add_elem() and let the implementation of populate()
> ops to call it?

I think the callback gives a bit more freedom and allows passing one's
own function which performs some actions (e.g. filtering) per object.
In fact, I think an opaque parameter should be added to the callback
prototype to make it really useful for customization (to provide
specific context and make it possible to chain callbacks).
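
For instance, with an opaque pointer in the prototype it becomes
possible to chain an extra per-object action in front of the library
callback (a sketch of the idea only; the extended prototype and all
the names below are hypothetical):

/* hypothetical prototype extended with an opaque context */
typedef void (xxx_populate_obj_cb_t)(struct rte_mempool *mp,
		void *opaque, void *obj, rte_iova_t iova);

struct chain_ctx {
	xxx_populate_obj_cb_t *next_cb;	/* e.g. mempool_add_elem */
	void *next_opaque;
	unsigned int n_objs;
};

static void
counting_obj_cb(struct rte_mempool *mp, void *opaque, void *obj,
		rte_iova_t iova)
{
	struct chain_ctx *ctx = opaque;

	ctx->n_objs++;			/* extra per-object action */
	ctx->next_cb(mp, ctx->next_opaque, obj, iova);	/* chain */
}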

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [RFC v2 11/17] mempool: ensure the mempool is initialized before populating
  2018-01-31 16:45     ` Olivier Matz
@ 2018-02-01  8:53       ` Andrew Rybchenko
  0 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-02-01  8:53 UTC (permalink / raw)
  To: Olivier Matz; +Cc: dev, Artem V. Andreev

On 01/31/2018 07:45 PM, Olivier Matz wrote:
> On Tue, Jan 23, 2018 at 01:16:06PM +0000, Andrew Rybchenko wrote:
>> From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>
>>
>> Callback to calculate required memory area size may require mempool
>> driver data to be already allocated and initialized.
>>
>> Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
>> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
>> ---
>>   lib/librte_mempool/rte_mempool.c | 29 ++++++++++++++++++++++-------
>>   1 file changed, 22 insertions(+), 7 deletions(-)
>>
>> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
>> index fc9c95a..cbb4dd5 100644
>> --- a/lib/librte_mempool/rte_mempool.c
>> +++ b/lib/librte_mempool/rte_mempool.c
>> @@ -370,6 +370,21 @@ rte_mempool_free_memchunks(struct rte_mempool *mp)
>>   	}
>>   }
>>   
>> +static int
>> +mempool_maybe_initialize(struct rte_mempool *mp)
>> +{
>> +	int ret;
>> +
>> +	/* create the internal ring if not already done */
>> +	if ((mp->flags & MEMPOOL_F_POOL_CREATED) == 0) {
>> +		ret = rte_mempool_ops_alloc(mp);
>> +		if (ret != 0)
>> +			return ret;
>> +		mp->flags |= MEMPOOL_F_POOL_CREATED;
>> +	}
>> +	return 0;
>> +}
> mempool_ops_alloc_once() ?

Yes, I like it. Will fix.

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [RFC v2 01/17] mempool: fix phys contig check if populate default skipped
  2018-02-01  6:54         ` Andrew Rybchenko
@ 2018-02-01  9:09           ` santosh
  2018-02-01  9:18             ` Andrew Rybchenko
  0 siblings, 1 reply; 197+ messages in thread
From: santosh @ 2018-02-01  9:09 UTC (permalink / raw)
  To: Andrew Rybchenko, Olivier Matz; +Cc: dev, stable, jerin.jacob


On Thursday 01 February 2018 12:24 PM, Andrew Rybchenko wrote:
> On 02/01/2018 08:05 AM, santosh wrote:
>> On Wednesday 31 January 2018 10:15 PM, Olivier Matz wrote:
>>> On Tue, Jan 23, 2018 at 01:15:56PM +0000, Andrew Rybchenko wrote:
>>>> There is not specified dependency between rte_mempool_populate_default()
>>>> and rte_mempool_populate_iova(). So, the second should not rely on the
>>>> fact that the first adds capability flags to the mempool flags.
>>>>
>>>> Fixes: 65cf769f5e6a ("mempool: detect physical contiguous objects")
>>>> Cc: stable@dpdk.org
>>>>
>>>> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
>>> Looks good to me. I agree it's strange that the mp->flags are
>>> updated with capabilities only in rte_mempool_populate_default().
>>> I see that this behavior is removed later in the patchset since the
>>> get_capa() is removed!
>>>
>>> However maybe this single patch could go in 18.02.
>>> +Santosh +Jerin since it's mostly about Octeon.
>> rte_mempool_xmem_size should return correct size if MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS flag
>> is set in 'mp->flags'. Thats why _ops_get_capabilities() called in _populate_default() but not
>> at _populate_iova().
>> I think, this 'alone' patch may break octeontx mempool.
>
> The patch does not touch rte_mempool_populate_default().
> _ops_get_capabilities() is still called there before
> rte_mempool_xmem_size(). The theoretical problem which
> the patch tries to fix is the case when
> rte_mempool_populate_default() is not called at all. I.e. application
> calls _ops_get_capabilities() to get flags, then, together with
> mp->flags, calls rte_mempool_xmem_size() directly, allocates
> calculated amount of memory and calls _populate_iova().
>
In that case, Application does like below:

	/* Get mempool capabilities */
	mp_flags = 0;
	ret = rte_mempool_ops_get_capabilities(mp, &mp_flags);
	if ((ret < 0) && (ret != -ENOTSUP))
		return ret;

	/* update mempool capabilities */
	mp->flags |= mp_flags;

	/* calc xmem sz */
	size = rte_mempool_xmem_size(n, total_elt_sz, pg_shift,
					mp->flags);

	/* rsrv memory */
	mz = rte_memzone_reserve_aligned(mz_name, size,...);

	/* now populate iova */
	ret = rte_mempool_populate_iova(mp,,..);

won't it work?

However I understand that clubbing `_get_ops_capa() + flag-updation` into _populate_iova()
perhaps better from user PoV.

> Since later patches of the series reconsider memory size
> calculation etc, it is up to you if it makes sense to apply it
> in 18.02 as a fix.
>

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [RFC v2 01/17] mempool: fix phys contig check if populate default skipped
  2018-02-01  9:09           ` santosh
@ 2018-02-01  9:18             ` Andrew Rybchenko
  2018-02-01  9:30               ` santosh
  0 siblings, 1 reply; 197+ messages in thread
From: Andrew Rybchenko @ 2018-02-01  9:18 UTC (permalink / raw)
  To: santosh, Andrew Rybchenko, Olivier Matz; +Cc: dev, stable, jerin.jacob

On 02/01/2018 12:09 PM, santosh wrote:
> On Thursday 01 February 2018 12:24 PM, Andrew Rybchenko wrote:
>> On 02/01/2018 08:05 AM, santosh wrote:
>>> On Wednesday 31 January 2018 10:15 PM, Olivier Matz wrote:
>>>> On Tue, Jan 23, 2018 at 01:15:56PM +0000, Andrew Rybchenko wrote:
>>>>> There is not specified dependency between rte_mempool_populate_default()
>>>>> and rte_mempool_populate_iova(). So, the second should not rely on the
>>>>> fact that the first adds capability flags to the mempool flags.
>>>>>
>>>>> Fixes: 65cf769f5e6a ("mempool: detect physical contiguous objects")
>>>>> Cc: stable@dpdk.org
>>>>>
>>>>> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
>>>> Looks good to me. I agree it's strange that the mp->flags are
>>>> updated with capabilities only in rte_mempool_populate_default().
>>>> I see that this behavior is removed later in the patchset since the
>>>> get_capa() is removed!
>>>>
>>>> However maybe this single patch could go in 18.02.
>>>> +Santosh +Jerin since it's mostly about Octeon.
>>> rte_mempool_xmem_size should return correct size if MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS flag
>>> is set in 'mp->flags'. Thats why _ops_get_capabilities() called in _populate_default() but not
>>> at _populate_iova().
>>> I think, this 'alone' patch may break octeontx mempool.
>> The patch does not touch rte_mempool_populate_default().
>> _ops_get_capabilities() is still called there before
>> rte_mempool_xmem_size(). The theoretical problem which
>> the patch tries to fix is the case when
>> rte_mempool_populate_default() is not called at all. I.e. application
>> calls _ops_get_capabilities() to get flags, then, together with
>> mp->flags, calls rte_mempool_xmem_size() directly, allocates
>> calculated amount of memory and calls _populate_iova().
>>
> In that case, Application does like below:
>
> 	/* Get mempool capabilities */
> 	mp_flags = 0;
> 	ret = rte_mempool_ops_get_capabilities(mp, &mp_flags);
> 	if ((ret < 0) && (ret != -ENOTSUP))
> 		return ret;
>
> 	/* update mempool capabilities */
> 	mp->flags |= mp_flags;

Above line is not mandatory. "mp->flags | mp_flags" could be simply
passed to  rte_mempool_xmem_size() below.

> 	/* calc xmem sz */
> 	size = rte_mempool_xmem_size(n, total_elt_sz, pg_shift,
> 					mp->flags);
>
> 	/* rsrv memory */
> 	mz = rte_memzone_reserve_aligned(mz_name, size,...);
>
> 	/* now populate iova */
> 	ret = rte_mempool_populate_iova(mp,,..);
>
> won't it work?
>
> However I understand that clubbing `_get_ops_capa() + flag-updation` into _populate_iova()
> perhaps better from user PoV.
>
>> Since later patches of the series reconsider memory size
>> calculation etc, it is up to you if it makes sense to apply it
>> in 18.02 as a fix.

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [RFC v2 01/17] mempool: fix phys contig check if populate default skipped
  2018-02-01  9:18             ` Andrew Rybchenko
@ 2018-02-01  9:30               ` santosh
  2018-02-01 10:00                 ` Andrew Rybchenko
  0 siblings, 1 reply; 197+ messages in thread
From: santosh @ 2018-02-01  9:30 UTC (permalink / raw)
  To: Andrew Rybchenko, Olivier Matz; +Cc: dev, stable, jerin.jacob


On Thursday 01 February 2018 02:48 PM, Andrew Rybchenko wrote:
> On 02/01/2018 12:09 PM, santosh wrote:
>> On Thursday 01 February 2018 12:24 PM, Andrew Rybchenko wrote:
>>> On 02/01/2018 08:05 AM, santosh wrote:
>>>> On Wednesday 31 January 2018 10:15 PM, Olivier Matz wrote:
>>>>> On Tue, Jan 23, 2018 at 01:15:56PM +0000, Andrew Rybchenko wrote:
>>>>>> There is not specified dependency between rte_mempool_populate_default()
>>>>>> and rte_mempool_populate_iova(). So, the second should not rely on the
>>>>>> fact that the first adds capability flags to the mempool flags.
>>>>>>
>>>>>> Fixes: 65cf769f5e6a ("mempool: detect physical contiguous objects")
>>>>>> Cc: stable@dpdk.org
>>>>>>
>>>>>> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
>>>>> Looks good to me. I agree it's strange that the mp->flags are
>>>>> updated with capabilities only in rte_mempool_populate_default().
>>>>> I see that this behavior is removed later in the patchset since the
>>>>> get_capa() is removed!
>>>>>
>>>>> However maybe this single patch could go in 18.02.
>>>>> +Santosh +Jerin since it's mostly about Octeon.
>>>> rte_mempool_xmem_size should return correct size if MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS flag
>>>> is set in 'mp->flags'. Thats why _ops_get_capabilities() called in _populate_default() but not
>>>> at _populate_iova().
>>>> I think, this 'alone' patch may break octeontx mempool.
>>> The patch does not touch rte_mempool_populate_default().
>>> _ops_get_capabilities() is still called there before
>>> rte_mempool_xmem_size(). The theoretical problem which
>>> the patch tries to fix is the case when
>>> rte_mempool_populate_default() is not called at all. I.e. application
>>> calls _ops_get_capabilities() to get flags, then, together with
>>> mp->flags, calls rte_mempool_xmem_size() directly, allocates
>>> calculated amount of memory and calls _populate_iova().
>>>
>> In that case, Application does like below:
>>
>>     /* Get mempool capabilities */
>>     mp_flags = 0;
>>     ret = rte_mempool_ops_get_capabilities(mp, &mp_flags);
>>     if ((ret < 0) && (ret != -ENOTSUP))
>>         return ret;
>>
>>     /* update mempool capabilities */
>>     mp->flags |= mp_flags;
>
> Above line is not mandatory. "mp->flags | mp_flags" could be simply
> passed to  rte_mempool_xmem_size() below.
>
That depends and again upto application requirement, if app further down
wants to refer mp->flags for _align/_contig then better update to mp->flags.

But that wasn't the point of discussion, I'm trying to understand that
w/o this patch, whats could be the application level problem?

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [RFC v2 01/17] mempool: fix phys contig check if populate default skipped
  2018-02-01  9:30               ` santosh
@ 2018-02-01 10:00                 ` Andrew Rybchenko
  2018-02-01 10:14                   ` Olivier Matz
  2018-02-01 10:17                   ` santosh
  0 siblings, 2 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-02-01 10:00 UTC (permalink / raw)
  To: santosh, Olivier Matz; +Cc: dev, stable, jerin.jacob

On 02/01/2018 12:30 PM, santosh wrote:
> On Thursday 01 February 2018 02:48 PM, Andrew Rybchenko wrote:
>> On 02/01/2018 12:09 PM, santosh wrote:
>>> On Thursday 01 February 2018 12:24 PM, Andrew Rybchenko wrote:
>>>> On 02/01/2018 08:05 AM, santosh wrote:
>>>>> On Wednesday 31 January 2018 10:15 PM, Olivier Matz wrote:
>>>>>> On Tue, Jan 23, 2018 at 01:15:56PM +0000, Andrew Rybchenko wrote:
>>>>>>> There is not specified dependency between rte_mempool_populate_default()
>>>>>>> and rte_mempool_populate_iova(). So, the second should not rely on the
>>>>>>> fact that the first adds capability flags to the mempool flags.
>>>>>>>
>>>>>>> Fixes: 65cf769f5e6a ("mempool: detect physical contiguous objects")
>>>>>>> Cc: stable@dpdk.org
>>>>>>>
>>>>>>> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
>>>>>> Looks good to me. I agree it's strange that the mp->flags are
>>>>>> updated with capabilities only in rte_mempool_populate_default().
>>>>>> I see that this behavior is removed later in the patchset since the
>>>>>> get_capa() is removed!
>>>>>>
>>>>>> However maybe this single patch could go in 18.02.
>>>>>> +Santosh +Jerin since it's mostly about Octeon.
>>>>> rte_mempool_xmem_size should return correct size if MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS flag
>>>>> is set in 'mp->flags'. Thats why _ops_get_capabilities() called in _populate_default() but not
>>>>> at _populate_iova().
>>>>> I think, this 'alone' patch may break octeontx mempool.
>>>> The patch does not touch rte_mempool_populate_default().
>>>> _ops_get_capabilities() is still called there before
>>>> rte_mempool_xmem_size(). The theoretical problem which
>>>> the patch tries to fix is the case when
>>>> rte_mempool_populate_default() is not called at all. I.e. application
>>>> calls _ops_get_capabilities() to get flags, then, together with
>>>> mp->flags, calls rte_mempool_xmem_size() directly, allocates
>>>> calculated amount of memory and calls _populate_iova().
>>>>
>>> In that case, Application does like below:
>>>
>>>      /* Get mempool capabilities */
>>>      mp_flags = 0;
>>>      ret = rte_mempool_ops_get_capabilities(mp, &mp_flags);
>>>      if ((ret < 0) && (ret != -ENOTSUP))
>>>          return ret;
>>>
>>>      /* update mempool capabilities */
>>>      mp->flags |= mp_flags;
>> Above line is not mandatory. "mp->flags | mp_flags" could be simply
>> passed to  rte_mempool_xmem_size() below.
>>
> That depends and again upto application requirement, if app further down
> wants to refer mp->flags for _align/_contig then better update to mp->flags.
>
> But that wasn't the point of discussion, I'm trying to understand that
> w/o this patch, whats could be the application level problem?

The problem that it is fragile. If application does not use
rte_mempool_populate_default() it has to care about addition
of mempool capability flags into mempool flags. If it is not done,
rte_mempool_populate_iova/virt/iova_tab() functions will work
incorrectly since F_CAPA_PHYS_CONTIG and
F_CAPA_BLK_ALIGNED_OBJECTS are missing.

The idea of the patch is to make it a bit more robust. I have no
idea how it can break something. If capability flags are already
there - no problem. If no, just make sure that we locally have them.

^ permalink raw reply	[flat|nested] 197+ messages in thread

* [RFC v2 03/17] mempool/octeontx: add callback to calculate memory size
       [not found]     ` <BN3PR07MB2513732462EB5FE5E1B05713E3FA0@BN3PR07MB2513.namprd07.prod.outlook.com>
@ 2018-02-01 10:01       ` santosh
  2018-02-01 13:40         ` santosh
  0 siblings, 1 reply; 197+ messages in thread
From: santosh @ 2018-02-01 10:01 UTC (permalink / raw)
  To: Andrew Rybchenko, dev; +Cc: Olivier MATZ, Jerin Jacob

Hi Andrew,


On Thursday 01 February 2018 11:48 AM, Jacob, Jerin wrote:
> The driver requires one and only one physically contiguous
> memory chunk for all objects.
>
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
> ---
>   drivers/mempool/octeontx/rte_mempool_octeontx.c | 25 +++++++++++++++++++++++++
>   1 file changed, 25 insertions(+)
>
> diff --git a/drivers/mempool/octeontx/rte_mempool_octeontx.c 
> b/drivers/mempool/octeontx/rte_mempool_octeontx.c
> index d143d05..4ec5efe 100644
> --- a/drivers/mempool/octeontx/rte_mempool_octeontx.c
> +++ b/drivers/mempool/octeontx/rte_mempool_octeontx.c
> @@ -136,6 +136,30 @@ octeontx_fpavf_get_capabilities(const struct rte_mempool *mp,
>           return 0;
>   }
>
> +static ssize_t
> +octeontx_fpavf_calc_mem_size(const struct rte_mempool *mp,
> +                            uint32_t obj_num, uint32_t pg_shift,
> +                            size_t *min_chunk_size, size_t *align)
> +{
> +       ssize_t mem_size;
> +
> +       /*
> +        * Simply need space for one more object to be able to
> +        * fullfil alignment requirements.
> +        */
> +       mem_size = rte_mempool_calc_mem_size_def(mp, obj_num + 1, pg_shift,
> +      

I think, you don't need that (obj_num + 1) as because
rte_xmem_calc_int() will be checking flags for
_ALIGNED + _CAPA_PHYS_CONFIG i.e..

	mask = MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS | MEMPOOL_F_CAPA_PHYS_CONTIG;
	if ((flags & mask) == mask)
		/* alignment need one additional object */
		elt_num += 1;

>                                           min_chunk_size, align);
> +       if (mem_size >= 0) {
> +               /*
> +                * The whole memory area containing the objects must be
> +                * physically contiguous.
> +                */
> +               *min_chunk_size = mem_size;
> +       }
> +
> +       return mem_size;
> +}
> +
>   static int
>   octeontx_fpavf_register_memory_area(const struct rte_mempool *mp,
>                                       char *vaddr, rte_iova_t paddr, size_t len)
> @@ -159,6 +183,7 @@ static struct rte_mempool_ops octeontx_fpavf_ops = {
>           .get_count = octeontx_fpavf_get_count,
>           .get_capabilities = octeontx_fpavf_get_capabilities,
>           .register_memory_area = octeontx_fpavf_register_memory_area,
> +       .calc_mem_size = octeontx_fpavf_calc_mem_size,
>   };
>
>   MEMPOOL_REGISTER_OPS(octeontx_fpavf_ops);
> -- 
> 2.7.4
>

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [RFC v2 01/17] mempool: fix phys contig check if populate default skipped
  2018-02-01 10:00                 ` Andrew Rybchenko
@ 2018-02-01 10:14                   ` Olivier Matz
  2018-02-01 10:33                     ` santosh
  2018-02-01 10:17                   ` santosh
  1 sibling, 1 reply; 197+ messages in thread
From: Olivier Matz @ 2018-02-01 10:14 UTC (permalink / raw)
  To: Andrew Rybchenko; +Cc: santosh, dev, stable, jerin.jacob

On Thu, Feb 01, 2018 at 01:00:12PM +0300, Andrew Rybchenko wrote:
> On 02/01/2018 12:30 PM, santosh wrote:
> > On Thursday 01 February 2018 02:48 PM, Andrew Rybchenko wrote:
> > > On 02/01/2018 12:09 PM, santosh wrote:
> > > > On Thursday 01 February 2018 12:24 PM, Andrew Rybchenko wrote:
> > > > > On 02/01/2018 08:05 AM, santosh wrote:
> > > > > > On Wednesday 31 January 2018 10:15 PM, Olivier Matz wrote:
> > > > > > > On Tue, Jan 23, 2018 at 01:15:56PM +0000, Andrew Rybchenko wrote:
> > > > > > > > There is not specified dependency between rte_mempool_populate_default()
> > > > > > > > and rte_mempool_populate_iova(). So, the second should not rely on the
> > > > > > > > fact that the first adds capability flags to the mempool flags.
> > > > > > > > 
> > > > > > > > Fixes: 65cf769f5e6a ("mempool: detect physical contiguous objects")
> > > > > > > > Cc: stable@dpdk.org
> > > > > > > > 
> > > > > > > > Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
> > > > > > > Looks good to me. I agree it's strange that the mp->flags are
> > > > > > > updated with capabilities only in rte_mempool_populate_default().
> > > > > > > I see that this behavior is removed later in the patchset since the
> > > > > > > get_capa() is removed!
> > > > > > > 
> > > > > > > However maybe this single patch could go in 18.02.
> > > > > > > +Santosh +Jerin since it's mostly about Octeon.
> > > > > > rte_mempool_xmem_size should return correct size if MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS flag
> > > > > > is set in 'mp->flags'. Thats why _ops_get_capabilities() called in _populate_default() but not
> > > > > > at _populate_iova().
> > > > > > I think, this 'alone' patch may break octeontx mempool.
> > > > > The patch does not touch rte_mempool_populate_default().
> > > > > _ops_get_capabilities() is still called there before
> > > > > rte_mempool_xmem_size(). The theoretical problem which
> > > > > the patch tries to fix is the case when
> > > > > rte_mempool_populate_default() is not called at all. I.e. application
> > > > > calls _ops_get_capabilities() to get flags, then, together with
> > > > > mp->flags, calls rte_mempool_xmem_size() directly, allocates
> > > > > calculated amount of memory and calls _populate_iova().
> > > > > 
> > > > In that case, Application does like below:
> > > > 
> > > >      /* Get mempool capabilities */
> > > >      mp_flags = 0;
> > > >      ret = rte_mempool_ops_get_capabilities(mp, &mp_flags);
> > > >      if ((ret < 0) && (ret != -ENOTSUP))
> > > >          return ret;
> > > > 
> > > >      /* update mempool capabilities */
> > > >      mp->flags |= mp_flags;
> > > Above line is not mandatory. "mp->flags | mp_flags" could be simply
> > > passed to  rte_mempool_xmem_size() below.
> > > 
> > That depends and again upto application requirement, if app further down
> > wants to refer mp->flags for _align/_contig then better update to mp->flags.
> > 
> > But that wasn't the point of discussion, I'm trying to understand that
> > w/o this patch, whats could be the application level problem?
> 
> The problem that it is fragile. If application does not use
> rte_mempool_populate_default() it has to care about addition
> of mempool capability flags into mempool flags. If it is not done,
> rte_mempool_populate_iova/virt/iova_tab() functions will work
> incorrectly since F_CAPA_PHYS_CONTIG and
> F_CAPA_BLK_ALIGNED_OBJECTS are missing.
> 
> The idea of the patch is to make it a bit more robust. I have no
> idea how it can break something. If capability flags are already
> there - no problem. If no, just make sure that we locally have them.

The example given by Santosh will work, but it is *not* the role of the
application to update the mempool flags. And nothing says that it is mandatory
to call rte_mempool_ops_get_capabilities() before the populate functions.

For instance, in testpmd it calls rte_mempool_populate_anon() when using
anonymous memory. The capabilities will never be updated in mp->flags.

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [RFC v2 01/17] mempool: fix phys contig check if populate default skipped
  2018-02-01 10:00                 ` Andrew Rybchenko
  2018-02-01 10:14                   ` Olivier Matz
@ 2018-02-01 10:17                   ` santosh
  1 sibling, 0 replies; 197+ messages in thread
From: santosh @ 2018-02-01 10:17 UTC (permalink / raw)
  To: Andrew Rybchenko, Olivier Matz; +Cc: dev, stable, jerin.jacob


On Thursday 01 February 2018 03:30 PM, Andrew Rybchenko wrote:
> On 02/01/2018 12:30 PM, santosh wrote:
>> On Thursday 01 February 2018 02:48 PM, Andrew Rybchenko wrote:
>>> On 02/01/2018 12:09 PM, santosh wrote:
>>>> On Thursday 01 February 2018 12:24 PM, Andrew Rybchenko wrote:
>>>>> On 02/01/2018 08:05 AM, santosh wrote:
>>>>>> On Wednesday 31 January 2018 10:15 PM, Olivier Matz wrote:
>>>>>>> On Tue, Jan 23, 2018 at 01:15:56PM +0000, Andrew Rybchenko wrote:
>>>>>>>> There is not specified dependency between rte_mempool_populate_default()
>>>>>>>> and rte_mempool_populate_iova(). So, the second should not rely on the
>>>>>>>> fact that the first adds capability flags to the mempool flags.
>>>>>>>>
>>>>>>>> Fixes: 65cf769f5e6a ("mempool: detect physical contiguous objects")
>>>>>>>> Cc: stable@dpdk.org
>>>>>>>>
>>>>>>>> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
>>>>>>> Looks good to me. I agree it's strange that the mp->flags are
>>>>>>> updated with capabilities only in rte_mempool_populate_default().
>>>>>>> I see that this behavior is removed later in the patchset since the
>>>>>>> get_capa() is removed!
>>>>>>>
>>>>>>> However maybe this single patch could go in 18.02.
>>>>>>> +Santosh +Jerin since it's mostly about Octeon.
>>>>>> rte_mempool_xmem_size should return correct size if MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS flag
>>>>>> is set in 'mp->flags'. Thats why _ops_get_capabilities() called in _populate_default() but not
>>>>>> at _populate_iova().
>>>>>> I think, this 'alone' patch may break octeontx mempool.
>>>>> The patch does not touch rte_mempool_populate_default().
>>>>> _ops_get_capabilities() is still called there before
>>>>> rte_mempool_xmem_size(). The theoretical problem which
>>>>> the patch tries to fix is the case when
>>>>> rte_mempool_populate_default() is not called at all. I.e. application
>>>>> calls _ops_get_capabilities() to get flags, then, together with
>>>>> mp->flags, calls rte_mempool_xmem_size() directly, allocates
>>>>> calculated amount of memory and calls _populate_iova().
>>>>>
>>>> In that case, Application does like below:
>>>>
>>>>      /* Get mempool capabilities */
>>>>      mp_flags = 0;
>>>>      ret = rte_mempool_ops_get_capabilities(mp, &mp_flags);
>>>>      if ((ret < 0) && (ret != -ENOTSUP))
>>>>          return ret;
>>>>
>>>>      /* update mempool capabilities */
>>>>      mp->flags |= mp_flags;
>>> Above line is not mandatory. "mp->flags | mp_flags" could be simply
>>> passed to  rte_mempool_xmem_size() below.
>>>
>> That depends and again upto application requirement, if app further down
>> wants to refer mp->flags for _align/_contig then better update to mp->flags.
>>
>> But that wasn't the point of discussion, I'm trying to understand that
>> w/o this patch, whats could be the application level problem?
>
> The problem that it is fragile. If application does not use
> rte_mempool_populate_default() it has to care about addition
> of mempool capability flags into mempool flags. If it is not done,

Capability flags should get updated into the mempool flags. Or else,
_get_ops_capabilities() should update the capability flags in the
mempool flags internally; I recall that I proposed the same in the past.

[...]

> The idea of the patch is to make it a bit more robust. I have no
> idea how it can break something. If capability flags are already
> there - no problem. If no, just make sure that we locally have them.
>
I would prefer _get_ops_capabilities() to update the capability flags
in mp->flags once, rather than doing (mp->flags | mp_flags) across
mempool functions.

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [RFC v2 01/17] mempool: fix phys contig check if populate default skipped
  2018-02-01 10:14                   ` Olivier Matz
@ 2018-02-01 10:33                     ` santosh
  2018-02-01 14:02                       ` Andrew Rybchenko
  0 siblings, 1 reply; 197+ messages in thread
From: santosh @ 2018-02-01 10:33 UTC (permalink / raw)
  To: Olivier Matz, Andrew Rybchenko; +Cc: dev, stable, jerin.jacob


On Thursday 01 February 2018 03:44 PM, Olivier Matz wrote:
> On Thu, Feb 01, 2018 at 01:00:12PM +0300, Andrew Rybchenko wrote:
>> On 02/01/2018 12:30 PM, santosh wrote:
>>> On Thursday 01 February 2018 02:48 PM, Andrew Rybchenko wrote:
>>>> On 02/01/2018 12:09 PM, santosh wrote:
>>>>> On Thursday 01 February 2018 12:24 PM, Andrew Rybchenko wrote:
>>>>>> On 02/01/2018 08:05 AM, santosh wrote:
>>>>>>> On Wednesday 31 January 2018 10:15 PM, Olivier Matz wrote:
>>>>>>>> On Tue, Jan 23, 2018 at 01:15:56PM +0000, Andrew Rybchenko wrote:
>>>>>>>>> There is not specified dependency between rte_mempool_populate_default()
>>>>>>>>> and rte_mempool_populate_iova(). So, the second should not rely on the
>>>>>>>>> fact that the first adds capability flags to the mempool flags.
>>>>>>>>>
>>>>>>>>> Fixes: 65cf769f5e6a ("mempool: detect physical contiguous objects")
>>>>>>>>> Cc: stable@dpdk.org
>>>>>>>>>
>>>>>>>>> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
>>>>>>>> Looks good to me. I agree it's strange that the mp->flags are
>>>>>>>> updated with capabilities only in rte_mempool_populate_default().
>>>>>>>> I see that this behavior is removed later in the patchset since the
>>>>>>>> get_capa() is removed!
>>>>>>>>
>>>>>>>> However maybe this single patch could go in 18.02.
>>>>>>>> +Santosh +Jerin since it's mostly about Octeon.
>>>>>>> rte_mempool_xmem_size should return correct size if MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS flag
>>>>>>> is set in 'mp->flags'. Thats why _ops_get_capabilities() called in _populate_default() but not
>>>>>>> at _populate_iova().
>>>>>>> I think, this 'alone' patch may break octeontx mempool.
>>>>>> The patch does not touch rte_mempool_populate_default().
>>>>>> _ops_get_capabilities() is still called there before
>>>>>> rte_mempool_xmem_size(). The theoretical problem which
>>>>>> the patch tries to fix is the case when
>>>>>> rte_mempool_populate_default() is not called at all. I.e. application
>>>>>> calls _ops_get_capabilities() to get flags, then, together with
>>>>>> mp->flags, calls rte_mempool_xmem_size() directly, allocates
>>>>>> calculated amount of memory and calls _populate_iova().
>>>>>>
>>>>> In that case, Application does like below:
>>>>>
>>>>>      /* Get mempool capabilities */
>>>>>      mp_flags = 0;
>>>>>      ret = rte_mempool_ops_get_capabilities(mp, &mp_flags);
>>>>>      if ((ret < 0) && (ret != -ENOTSUP))
>>>>>          return ret;
>>>>>
>>>>>      /* update mempool capabilities */
>>>>>      mp->flags |= mp_flags;
>>>> Above line is not mandatory. "mp->flags | mp_flags" could be simply
>>>> passed to  rte_mempool_xmem_size() below.
>>>>
>>> That depends and again upto application requirement, if app further down
>>> wants to refer mp->flags for _align/_contig then better update to mp->flags.
>>>
>>> But that wasn't the point of discussion, I'm trying to understand that
>>> w/o this patch, whats could be the application level problem?
>> The problem that it is fragile. If application does not use
>> rte_mempool_populate_default() it has to care about addition
>> of mempool capability flags into mempool flags. If it is not done,
>> rte_mempool_populate_iova/virt/iova_tab() functions will work
>> incorrectly since F_CAPA_PHYS_CONTIG and
>> F_CAPA_BLK_ALIGNED_OBJECTS are missing.
>>
>> The idea of the patch is to make it a bit more robust. I have no
>> idea how it can break something. If capability flags are already
>> there - no problem. If no, just make sure that we locally have them.
> The example given by Santosh will work, but it is *not* the role of the
> application to update the mempool flags. And nothing says that it is mandatory
> to call rte_mempool_ops_get_capabilities() before the populate functions.
>
> For instance, in testpmd it calls rte_mempool_populate_anon() when using
> anonymous memory. The capabilities will never be updated in mp->flags.

Valid case and I agree with your example and explanation.
With nits change:
mp->flags |= mp_capa_flags;

Acked-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [RFC v2 03/17] mempool/octeontx: add callback to calculate memory size
  2018-02-01 10:01       ` santosh
@ 2018-02-01 13:40         ` santosh
  2018-03-10 15:49           ` Andrew Rybchenko
  0 siblings, 1 reply; 197+ messages in thread
From: santosh @ 2018-02-01 13:40 UTC (permalink / raw)
  To: Andrew Rybchenko, dev; +Cc: Olivier MATZ, Jerin Jacob


On Thursday 01 February 2018 03:31 PM, santosh wrote:
> Hi Andrew,
>
>
> On Thursday 01 February 2018 11:48 AM, Jacob, Jerin wrote:
>> The driver requires one and only one physically contiguous
>> memory chunk for all objects.
>>
>> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
>> ---
>>   drivers/mempool/octeontx/rte_mempool_octeontx.c | 25 +++++++++++++++++++++++++
>>   1 file changed, 25 insertions(+)
>>
>> diff --git a/drivers/mempool/octeontx/rte_mempool_octeontx.c 
>> b/drivers/mempool/octeontx/rte_mempool_octeontx.c
>> index d143d05..4ec5efe 100644
>> --- a/drivers/mempool/octeontx/rte_mempool_octeontx.c
>> +++ b/drivers/mempool/octeontx/rte_mempool_octeontx.c
>> @@ -136,6 +136,30 @@ octeontx_fpavf_get_capabilities(const struct rte_mempool *mp,
>>           return 0;
>>   }
>>
>> +static ssize_t
>> +octeontx_fpavf_calc_mem_size(const struct rte_mempool *mp,
>> +                            uint32_t obj_num, uint32_t pg_shift,
>> +                            size_t *min_chunk_size, size_t *align)
>> +{
>> +       ssize_t mem_size;
>> +
>> +       /*
>> +        * Simply need space for one more object to be able to
>> +        * fullfil alignment requirements.
>> +        */
>> +       mem_size = rte_mempool_calc_mem_size_def(mp, obj_num + 1, pg_shift,
>> +      
> I think, you don't need that (obj_num + 1) as because
> rte_xmem_calc_int() will be checking flags for
> _ALIGNED + _CAPA_PHYS_CONFIG i.e..
>
> 	mask = MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS | MEMPOOL_F_CAPA_PHYS_CONTIG;
> 	if ((flags & mask) == mask)
> 		/* alignment need one additional object */
> 		elt_num += 1;

OK, you are removing the above check in v2 06/17, so ignore the above
comment. I suggest moving this patch so that it comes after 06/17, or
perhaps keeping the common mempool changes first, followed by the
driver-specific changes, in your v3 series.

Thanks.

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [RFC v2 01/17] mempool: fix phys contig check if populate default skipped
  2018-02-01 10:33                     ` santosh
@ 2018-02-01 14:02                       ` Andrew Rybchenko
  0 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-02-01 14:02 UTC (permalink / raw)
  To: santosh, Olivier Matz; +Cc: dev, stable, jerin.jacob

On 02/01/2018 01:33 PM, santosh wrote:
> On Thursday 01 February 2018 03:44 PM, Olivier Matz wrote:
>> On Thu, Feb 01, 2018 at 01:00:12PM +0300, Andrew Rybchenko wrote:
>>> On 02/01/2018 12:30 PM, santosh wrote:
>>>> On Thursday 01 February 2018 02:48 PM, Andrew Rybchenko wrote:
>>>>> On 02/01/2018 12:09 PM, santosh wrote:
>>>>>> On Thursday 01 February 2018 12:24 PM, Andrew Rybchenko wrote:
>>>>>>> On 02/01/2018 08:05 AM, santosh wrote:
>>>>>>>> On Wednesday 31 January 2018 10:15 PM, Olivier Matz wrote:
>>>>>>>>> On Tue, Jan 23, 2018 at 01:15:56PM +0000, Andrew Rybchenko wrote:
>>>>>>>>>> There is not specified dependency between rte_mempool_populate_default()
>>>>>>>>>> and rte_mempool_populate_iova(). So, the second should not rely on the
>>>>>>>>>> fact that the first adds capability flags to the mempool flags.
>>>>>>>>>>
>>>>>>>>>> Fixes: 65cf769f5e6a ("mempool: detect physical contiguous objects")
>>>>>>>>>> Cc: stable@dpdk.org
>>>>>>>>>>
>>>>>>>>>> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
>>>>>>>>> Looks good to me. I agree it's strange that the mp->flags are
>>>>>>>>> updated with capabilities only in rte_mempool_populate_default().
>>>>>>>>> I see that this behavior is removed later in the patchset since the
>>>>>>>>> get_capa() is removed!
>>>>>>>>>
>>>>>>>>> However maybe this single patch could go in 18.02.
>>>>>>>>> +Santosh +Jerin since it's mostly about Octeon.
>>>>>>>> rte_mempool_xmem_size should return correct size if MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS flag
>>>>>>>> is set in 'mp->flags'. Thats why _ops_get_capabilities() called in _populate_default() but not
>>>>>>>> at _populate_iova().
>>>>>>>> I think, this 'alone' patch may break octeontx mempool.
>>>>>>> The patch does not touch rte_mempool_populate_default().
>>>>>>> _ops_get_capabilities() is still called there before
>>>>>>> rte_mempool_xmem_size(). The theoretical problem which
>>>>>>> the patch tries to fix is the case when
>>>>>>> rte_mempool_populate_default() is not called at all. I.e. application
>>>>>>> calls _ops_get_capabilities() to get flags, then, together with
>>>>>>> mp->flags, calls rte_mempool_xmem_size() directly, allocates
>>>>>>> calculated amount of memory and calls _populate_iova().
>>>>>>>
>>>>>> In that case, Application does like below:
>>>>>>
>>>>>>       /* Get mempool capabilities */
>>>>>>       mp_flags = 0;
>>>>>>       ret = rte_mempool_ops_get_capabilities(mp, &mp_flags);
>>>>>>       if ((ret < 0) && (ret != -ENOTSUP))
>>>>>>           return ret;
>>>>>>
>>>>>>       /* update mempool capabilities */
>>>>>>       mp->flags |= mp_flags;
>>>>> Above line is not mandatory. "mp->flags | mp_flags" could be simply
>>>>> passed to  rte_mempool_xmem_size() below.
>>>>>
>>>> That depends and again upto application requirement, if app further down
>>>> wants to refer mp->flags for _align/_contig then better update to mp->flags.
>>>>
>>>> But that wasn't the point of discussion, I'm trying to understand that
>>>> w/o this patch, whats could be the application level problem?
>>> The problem that it is fragile. If application does not use
>>> rte_mempool_populate_default() it has to care about addition
>>> of mempool capability flags into mempool flags. If it is not done,
>>> rte_mempool_populate_iova/virt/iova_tab() functions will work
>>> incorrectly since F_CAPA_PHYS_CONTIG and
>>> F_CAPA_BLK_ALIGNED_OBJECTS are missing.
>>>
>>> The idea of the patch is to make it a bit more robust. I have no
>>> idea how it can break something. If capability flags are already
>>> there - no problem. If no, just make sure that we locally have them.
>> The example given by Santosh will work, but it is *not* the role of the
>> application to update the mempool flags. And nothing says that it is mandatory
>> to call rte_mempool_ops_get_capabilities() before the populate functions.
>>
>> For instance, in testpmd it calls rte_mempool_populate_anon() when using
>> anonymous memory. The capabilities will never be updated in mp->flags.
> Valid case and I agree with your example and explanation.
> With nits change:
> mp->flags |= mp_capa_flags;
>
> Acked-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>

I'll submit the patch separately with this minor change. Thanks.

^ permalink raw reply	[flat|nested] 197+ messages in thread

* [PATCH] mempool: fix phys contig check if populate default skipped
  2018-01-23 13:15   ` [RFC v2 01/17] mempool: fix phys contig check if populate default skipped Andrew Rybchenko
  2018-01-31 16:45     ` Olivier Matz
@ 2018-02-01 14:02     ` Andrew Rybchenko
  2018-02-05 23:53       ` [dpdk-stable] " Thomas Monjalon
  1 sibling, 1 reply; 197+ messages in thread
From: Andrew Rybchenko @ 2018-02-01 14:02 UTC (permalink / raw)
  To: dev; +Cc: Olivier Matz, Santosh Shukla, stable

There is not specified dependency between rte_mempool_populate_default()
and rte_mempool_populate_iova(). So, the second should not rely on the
fact that the first adds capability flags to the mempool flags.

Fixes: 65cf769f5e6a ("mempool: detect physical contiguous objects")
Cc: stable@dpdk.org

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
---
 lib/librte_mempool/rte_mempool.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 6fdb723..54f7f4b 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -333,6 +333,7 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	void *opaque)
 {
 	unsigned total_elt_sz;
+	unsigned int mp_capa_flags;
 	unsigned i = 0;
 	size_t off;
 	struct rte_mempool_memhdr *memhdr;
@@ -357,8 +358,17 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 
 	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
 
+	/* Get mempool capabilities */
+	mp_capa_flags = 0;
+	ret = rte_mempool_ops_get_capabilities(mp, &mp_capa_flags);
+	if ((ret < 0) && (ret != -ENOTSUP))
+		return ret;
+
+	/* update mempool capabilities */
+	mp->flags |= mp_capa_flags;
+
 	/* Detect pool area has sufficient space for elements */
-	if (mp->flags & MEMPOOL_F_CAPA_PHYS_CONTIG) {
+	if (mp_capa_flags & MEMPOOL_F_CAPA_PHYS_CONTIG) {
 		if (len < total_elt_sz * mp->size) {
 			RTE_LOG(ERR, MEMPOOL,
 				"pool area %" PRIx64 " not enough\n",
@@ -378,7 +388,7 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	memhdr->free_cb = free_cb;
 	memhdr->opaque = opaque;
 
-	if (mp->flags & MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS)
+	if (mp_capa_flags & MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS)
 		/* align object start address to a multiple of total_elt_sz */
 		off = total_elt_sz - ((uintptr_t)vaddr % total_elt_sz);
 	else if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* Re: [dpdk-stable] [PATCH] mempool: fix phys contig check if populate default skipped
  2018-02-01 14:02     ` [PATCH] " Andrew Rybchenko
@ 2018-02-05 23:53       ` Thomas Monjalon
  0 siblings, 0 replies; 197+ messages in thread
From: Thomas Monjalon @ 2018-02-05 23:53 UTC (permalink / raw)
  To: Andrew Rybchenko; +Cc: stable, dev, Olivier Matz, Santosh Shukla

01/02/2018 15:02, Andrew Rybchenko:
> There is not specified dependency between rte_mempool_populate_default()
> and rte_mempool_populate_iova(). So, the second should not rely on the
> fact that the first adds capability flags to the mempool flags.
> 
> Fixes: 65cf769f5e6a ("mempool: detect physical contiguous objects")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
> Acked-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>

Applied, thanks

^ permalink raw reply	[flat|nested] 197+ messages in thread

* [PATCH v1 0/9] mempool: prepare to add bucket driver
  2018-01-23 13:15 ` [RFC v2 00/17] " Andrew Rybchenko
                     ` (17 preceding siblings ...)
  2018-01-31 16:44   ` [RFC v2 00/17] mempool: add bucket mempool driver Olivier Matz
@ 2018-03-10 15:39   ` Andrew Rybchenko
  2018-03-10 15:39     ` [PATCH v1 1/9] mempool: add op to calculate memory size to be allocated Andrew Rybchenko
                       ` (10 more replies)
  2018-03-25 16:20   ` [PATCH v2 00/11] " Andrew Rybchenko
                     ` (2 subsequent siblings)
  21 siblings, 11 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-10 15:39 UTC (permalink / raw)
  To: dev
  Cc: Olivier MATZ, Santosh Shukla, Jerin Jacob, Hemant Agrawal,
	Shreyansh Jain

The initial patch series [1] is split into two to simplify processing.
The second series relies on this one and will add the bucket mempool
driver and related ops.

The patch series has generic enhancements suggested by Olivier.
Basically it adds driver callbacks to calculate the required memory
size and to populate objects using the provided memory area. This
allows removing the so-called capability flags used before to tell the
generic code how to allocate and slice the allocated memory into
mempool objects.
The clean-up which removes get_capabilities and register_memory_area
is not strictly required, but I think it is the right thing to do.
Existing mempool drivers are updated.

I've kept rte_mempool_populate_iova_tab() intact since it seems not
to be directly related to the XMEM API functions.

It breaks the ABI since it changes rte_mempool_ops. It also removes
rte_mempool_ops_register_memory_area() and
rte_mempool_ops_get_capabilities() since the corresponding callbacks
are removed.

Internal global functions are not listed in the map file since they
are not part of the external API.

[1] http://dpdk.org/ml/archives/dev/2018-January/088698.html

RFCv1 -> RFCv2:
  - add driver ops to calculate required memory size and populate
    mempool objects, remove extra flags which were required before
    to control it
  - transition of octeontx and dpaa drivers to the new callbacks
  - change info API to get information from driver required to
    API user to know contiguous block size
  - remove get_capabilities (not required any more and may be
    substituted with more in info get API)
  - remove register_memory_area since it is substituted with
    populate callback which can do more
  - use SPDX tags
  - avoid all objects affinity to single lcore
  - fix bucket get_count
  - deprecate XMEM API
  - avoid introduction of a new function to flush cache
  - fix NO_CACHE_ALIGN case in bucket mempool

RFCv2 -> v1:
  - split the series in two
  - squash octeontx patches which implement calc_mem_size and populate
    callbacks into the patch which removes get_capabilities since it is
    the easiest way to untangle the tangle of tightly related library
    functions and flags advertised by the driver
  - consistently name default callbacks
  - move default callbacks to dedicated file
  - see detailed description in patches

Andrew Rybchenko (7):
  mempool: add op to calculate memory size to be allocated
  mempool: add op to populate objects using provided memory
  mempool: remove callback to get capabilities
  mempool: deprecate xmem functions
  mempool/octeontx: prepare to remove register memory area op
  mempool/dpaa: prepare to remove register memory area op
  mempool: remove callback to register memory area

Artem V. Andreev (2):
  mempool: ensure the mempool is initialized before populating
  mempool: support flushing the default cache of the mempool

 doc/guides/rel_notes/deprecation.rst            |  12 +-
 doc/guides/rel_notes/release_18_05.rst          |  32 ++-
 drivers/mempool/dpaa/dpaa_mempool.c             |  13 +-
 drivers/mempool/octeontx/rte_mempool_octeontx.c |  64 ++++--
 lib/librte_mempool/Makefile                     |   3 +-
 lib/librte_mempool/meson.build                  |   5 +-
 lib/librte_mempool/rte_mempool.c                | 159 +++++++--------
 lib/librte_mempool/rte_mempool.h                | 260 +++++++++++++++++-------
 lib/librte_mempool/rte_mempool_ops.c            |  37 ++--
 lib/librte_mempool/rte_mempool_ops_default.c    |  51 +++++
 lib/librte_mempool/rte_mempool_version.map      |  11 +-
 test/test/test_mempool.c                        |  31 ---
 12 files changed, 437 insertions(+), 241 deletions(-)
 create mode 100644 lib/librte_mempool/rte_mempool_ops_default.c

-- 
2.7.4

^ permalink raw reply	[flat|nested] 197+ messages in thread

* [PATCH v1 1/9] mempool: add op to calculate memory size to be allocated
  2018-03-10 15:39   ` [PATCH v1 0/9] mempool: prepare to add bucket driver Andrew Rybchenko
@ 2018-03-10 15:39     ` Andrew Rybchenko
  2018-03-11 12:51       ` santosh
  2018-03-19 17:03       ` Olivier Matz
  2018-03-10 15:39     ` [PATCH v1 2/9] mempool: add op to populate objects using provided memory Andrew Rybchenko
                       ` (9 subsequent siblings)
  10 siblings, 2 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-10 15:39 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ

The size of the memory chunk required to populate mempool objects
depends on how the objects are stored in the memory. Different mempool
drivers may have different requirements, and a new operation allows
calculating the memory size in accordance with the driver requirements
and advertising the requirements on minimum memory chunk size and
alignment in a generic way.

Bump the ABI version since the patch breaks it.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
RFCv2 -> v1:
 - move default calc_mem_size callback to rte_mempool_ops_default.c
 - add ABI changes to release notes
 - name default callback consistently: rte_mempool_op_<callback>_default()
 - bump ABI version since it is the first patch which breaks ABI
 - describe default callback behaviour in detail
 - avoid introduction of an internal function to cope with deprecation
   (keep it for the deprecation patch)
 - move cache-line or page boundary chunk alignment to default callback
 - highlight that min_chunk_size and align parameters are output only

 doc/guides/rel_notes/deprecation.rst         |  3 +-
 doc/guides/rel_notes/release_18_05.rst       |  7 ++-
 lib/librte_mempool/Makefile                  |  3 +-
 lib/librte_mempool/meson.build               |  5 +-
 lib/librte_mempool/rte_mempool.c             | 43 +++++++--------
 lib/librte_mempool/rte_mempool.h             | 80 +++++++++++++++++++++++++++-
 lib/librte_mempool/rte_mempool_ops.c         | 18 +++++++
 lib/librte_mempool/rte_mempool_ops_default.c | 38 +++++++++++++
 lib/librte_mempool/rte_mempool_version.map   |  8 +++
 9 files changed, 177 insertions(+), 28 deletions(-)
 create mode 100644 lib/librte_mempool/rte_mempool_ops_default.c

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 6594585..e02d4ca 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -72,8 +72,7 @@ Deprecation Notices
 
   - removal of ``get_capabilities`` mempool ops and related flags.
   - substitute ``register_memory_area`` with ``populate`` ops.
-  - addition of new ops to customize required memory chunk calculation,
-    customize objects population and allocate contiguous
+  - addition of new ops to customize objects population and allocate contiguous
     block of objects if underlying driver supports it.
 
 * mbuf: The control mbuf API will be removed in v18.05. The impacted
diff --git a/doc/guides/rel_notes/release_18_05.rst b/doc/guides/rel_notes/release_18_05.rst
index f2525bb..59583ea 100644
--- a/doc/guides/rel_notes/release_18_05.rst
+++ b/doc/guides/rel_notes/release_18_05.rst
@@ -80,6 +80,11 @@ ABI Changes
    Also, make sure to start the actual text at the margin.
    =========================================================
 
+* **Changed rte_mempool_ops structure.**
+
+  A new callback ``calc_mem_size`` has been added to ``rte_mempool_ops``
+  to allow to customize required memory size calculation.
+
 
 Removed Items
 -------------
@@ -152,7 +157,7 @@ The libraries prepended with a plus sign were incremented in this version.
      librte_latencystats.so.1
      librte_lpm.so.2
      librte_mbuf.so.3
-     librte_mempool.so.3
+   + librte_mempool.so.4
    + librte_meter.so.2
      librte_metrics.so.1
      librte_net.so.1
diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
index 24e735a..072740f 100644
--- a/lib/librte_mempool/Makefile
+++ b/lib/librte_mempool/Makefile
@@ -11,11 +11,12 @@ LDLIBS += -lrte_eal -lrte_ring
 
 EXPORT_MAP := rte_mempool_version.map
 
-LIBABIVER := 3
+LIBABIVER := 4
 
 # all source are stored in SRCS-y
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool.c
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_ops.c
+SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_ops_default.c
 # install includes
 SYMLINK-$(CONFIG_RTE_LIBRTE_MEMPOOL)-include := rte_mempool.h
 
diff --git a/lib/librte_mempool/meson.build b/lib/librte_mempool/meson.build
index 7a4f3da..9e3b527 100644
--- a/lib/librte_mempool/meson.build
+++ b/lib/librte_mempool/meson.build
@@ -1,7 +1,8 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2017 Intel Corporation
 
-version = 2
-sources = files('rte_mempool.c', 'rte_mempool_ops.c')
+version = 4
+sources = files('rte_mempool.c', 'rte_mempool_ops.c',
+		'rte_mempool_ops_default.c')
 headers = files('rte_mempool.h')
 deps += ['ring']
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 54f7f4b..3bfb36e 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -544,39 +544,33 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 	unsigned int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
 	char mz_name[RTE_MEMZONE_NAMESIZE];
 	const struct rte_memzone *mz;
-	size_t size, total_elt_sz, align, pg_sz, pg_shift;
+	ssize_t mem_size;
+	size_t align, pg_sz, pg_shift;
 	rte_iova_t iova;
 	unsigned mz_id, n;
-	unsigned int mp_flags;
 	int ret;
 
 	/* mempool must not be populated */
 	if (mp->nb_mem_chunks != 0)
 		return -EEXIST;
 
-	/* Get mempool capabilities */
-	mp_flags = 0;
-	ret = rte_mempool_ops_get_capabilities(mp, &mp_flags);
-	if ((ret < 0) && (ret != -ENOTSUP))
-		return ret;
-
-	/* update mempool capabilities */
-	mp->flags |= mp_flags;
-
 	if (rte_eal_has_hugepages()) {
 		pg_shift = 0; /* not needed, zone is physically contiguous */
 		pg_sz = 0;
-		align = RTE_CACHE_LINE_SIZE;
 	} else {
 		pg_sz = getpagesize();
 		pg_shift = rte_bsf32(pg_sz);
-		align = pg_sz;
 	}
 
-	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
 	for (mz_id = 0, n = mp->size; n > 0; mz_id++, n -= ret) {
-		size = rte_mempool_xmem_size(n, total_elt_sz, pg_shift,
-						mp->flags);
+		size_t min_chunk_size;
+
+		mem_size = rte_mempool_ops_calc_mem_size(mp, n, pg_shift,
+				&min_chunk_size, &align);
+		if (mem_size < 0) {
+			ret = mem_size;
+			goto fail;
+		}
 
 		ret = snprintf(mz_name, sizeof(mz_name),
 			RTE_MEMPOOL_MZ_FORMAT "_%d", mp->name, mz_id);
@@ -585,7 +579,7 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 			goto fail;
 		}
 
-		mz = rte_memzone_reserve_aligned(mz_name, size,
+		mz = rte_memzone_reserve_aligned(mz_name, mem_size,
 			mp->socket_id, mz_flags, align);
 		/* not enough memory, retry with the biggest zone we have */
 		if (mz == NULL)
@@ -596,6 +590,12 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 			goto fail;
 		}
 
+		if (mz->len < min_chunk_size) {
+			rte_memzone_free(mz);
+			ret = -ENOMEM;
+			goto fail;
+		}
+
 		if (mp->flags & MEMPOOL_F_NO_PHYS_CONTIG)
 			iova = RTE_BAD_IOVA;
 		else
@@ -628,13 +628,14 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 static size_t
 get_anon_size(const struct rte_mempool *mp)
 {
-	size_t size, total_elt_sz, pg_sz, pg_shift;
+	size_t size, pg_sz, pg_shift;
+	size_t min_chunk_size;
+	size_t align;
 
 	pg_sz = getpagesize();
 	pg_shift = rte_bsf32(pg_sz);
-	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
-	size = rte_mempool_xmem_size(mp->size, total_elt_sz, pg_shift,
-					mp->flags);
+	size = rte_mempool_ops_calc_mem_size(mp, mp->size, pg_shift,
+					     &min_chunk_size, &align);
 
 	return size;
 }
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 8b1b7f7..0151f6c 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -399,6 +399,56 @@ typedef int (*rte_mempool_get_capabilities_t)(const struct rte_mempool *mp,
 typedef int (*rte_mempool_ops_register_memory_area_t)
 (const struct rte_mempool *mp, char *vaddr, rte_iova_t iova, size_t len);
 
+/**
+ * Calculate memory size required to store given number of objects.
+ *
+ * @param[in] mp
+ *   Pointer to the memory pool.
+ * @param[in] obj_num
+ *   Number of objects.
+ * @param[in] pg_shift
+ *   LOG2 of the physical pages size. If set to 0, ignore page boundaries.
+ * @param[out] min_chunk_size
+ *   Location for minimum size of the memory chunk which may be used to
+ *   store memory pool objects.
+ * @param[out] align
+ *   Location with required memory chunk alignment.
+ * @return
+ *   Required memory size aligned at page boundary.
+ */
+typedef ssize_t (*rte_mempool_calc_mem_size_t)(const struct rte_mempool *mp,
+		uint32_t obj_num,  uint32_t pg_shift,
+		size_t *min_chunk_size, size_t *align);
+
+/**
+ * Default way to calculate memory size required to store given number of
+ * objects.
+ *
+ * If page boundaries may be ignored, it is just a product of total
+ * object size including header and trailer and number of objects.
+ * Otherwise, it is a number of pages required to store given number of
+ * objects without crossing page boundary.
+ *
+ * Note that if object size is bigger than page size, then it assumes
+ * that pages are grouped in subsets of physically continuous pages big
+ * enough to store at least one object.
+ *
+ * If mempool driver requires object addresses to be block size aligned
+ * (MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS), space for one extra element is
+ * reserved to be able to meet the requirement.
+ *
+ * Minimum size of memory chunk is either all required space, if
+ * capabilities say that whole memory area must be physically contiguous
+ * (MEMPOOL_F_CAPA_PHYS_CONTIG), or a maximum of the page size and total
+ * element size.
+ *
+ * Required memory chunk alignment is a maximum of page size and cache
+ * line size.
+ */
+ssize_t rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
+		uint32_t obj_num, uint32_t pg_shift,
+		size_t *min_chunk_size, size_t *align);
+
 /** Structure defining mempool operations structure */
 struct rte_mempool_ops {
 	char name[RTE_MEMPOOL_OPS_NAMESIZE]; /**< Name of mempool ops struct. */
@@ -415,6 +465,11 @@ struct rte_mempool_ops {
 	 * Notify new memory area to mempool
 	 */
 	rte_mempool_ops_register_memory_area_t register_memory_area;
+	/**
+	 * Optional callback to calculate memory size required to
+	 * store specified number of objects.
+	 */
+	rte_mempool_calc_mem_size_t calc_mem_size;
 } __rte_cache_aligned;
 
 #define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */
@@ -564,6 +619,29 @@ rte_mempool_ops_register_memory_area(const struct rte_mempool *mp,
 				char *vaddr, rte_iova_t iova, size_t len);
 
 /**
+ * @internal wrapper for mempool_ops calc_mem_size callback.
+ * API to calculate size of memory required to store specified number of
+ * object.
+ *
+ * @param[in] mp
+ *   Pointer to the memory pool.
+ * @param[in] obj_num
+ *   Number of objects.
+ * @param[in] pg_shift
+ *   LOG2 of the physical pages size. If set to 0, ignore page boundaries.
+ * @param[out] min_chunk_size
+ *   Location for minimum size of the memory chunk which may be used to
+ *   store memory pool objects.
+ * @param[out] align
+ *   Location with required memory chunk alignment.
+ * @return
+ *   Required memory size aligned at page boundary.
+ */
+ssize_t rte_mempool_ops_calc_mem_size(const struct rte_mempool *mp,
+				      uint32_t obj_num, uint32_t pg_shift,
+				      size_t *min_chunk_size, size_t *align);
+
+/**
  * @internal wrapper for mempool_ops free callback.
  *
  * @param mp
@@ -1533,7 +1611,7 @@ uint32_t rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
  * of objects. Assume that the memory buffer will be aligned at page
  * boundary.
  *
- * Note that if object size is bigger then page size, then it assumes
+ * Note that if object size is bigger than page size, then it assumes
  * that pages are grouped in subsets of physically continuous pages big
  * enough to store at least one object.
  *
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
index 0732255..26908cc 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -59,6 +59,7 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h)
 	ops->get_count = h->get_count;
 	ops->get_capabilities = h->get_capabilities;
 	ops->register_memory_area = h->register_memory_area;
+	ops->calc_mem_size = h->calc_mem_size;
 
 	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
 
@@ -123,6 +124,23 @@ rte_mempool_ops_register_memory_area(const struct rte_mempool *mp, char *vaddr,
 	return ops->register_memory_area(mp, vaddr, iova, len);
 }
 
+/* wrapper to notify new memory area to external mempool */
+ssize_t
+rte_mempool_ops_calc_mem_size(const struct rte_mempool *mp,
+				uint32_t obj_num, uint32_t pg_shift,
+				size_t *min_chunk_size, size_t *align)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_get_ops(mp->ops_index);
+
+	if (ops->calc_mem_size == NULL)
+		return rte_mempool_op_calc_mem_size_default(mp, obj_num,
+				pg_shift, min_chunk_size, align);
+
+	return ops->calc_mem_size(mp, obj_num, pg_shift, min_chunk_size, align);
+}
+
 /* sets mempool ops previously registered by rte_mempool_register_ops. */
 int
 rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
diff --git a/lib/librte_mempool/rte_mempool_ops_default.c b/lib/librte_mempool/rte_mempool_ops_default.c
new file mode 100644
index 0000000..57fe79b
--- /dev/null
+++ b/lib/librte_mempool/rte_mempool_ops_default.c
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2016 Intel Corporation.
+ * Copyright(c) 2016 6WIND S.A.
+ * Copyright(c) 2018 Solarflare Communications Inc.
+ */
+
+#include <rte_mempool.h>
+
+ssize_t
+rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
+				     uint32_t obj_num, uint32_t pg_shift,
+				     size_t *min_chunk_size, size_t *align)
+{
+	unsigned int mp_flags;
+	int ret;
+	size_t total_elt_sz;
+	size_t mem_size;
+
+	/* Get mempool capabilities */
+	mp_flags = 0;
+	ret = rte_mempool_ops_get_capabilities(mp, &mp_flags);
+	if ((ret < 0) && (ret != -ENOTSUP))
+		return ret;
+
+	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
+
+	mem_size = rte_mempool_xmem_size(obj_num, total_elt_sz, pg_shift,
+					 mp->flags | mp_flags);
+
+	if (mp_flags & MEMPOOL_F_CAPA_PHYS_CONTIG)
+		*min_chunk_size = mem_size;
+	else
+		*min_chunk_size = RTE_MAX((size_t)1 << pg_shift, total_elt_sz);
+
+	*align = RTE_MAX((size_t)RTE_CACHE_LINE_SIZE, (size_t)1 << pg_shift);
+
+	return mem_size;
+}
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index 62b76f9..e2a054b 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -51,3 +51,11 @@ DPDK_17.11 {
 	rte_mempool_populate_iova_tab;
 
 } DPDK_16.07;
+
+DPDK_18.05 {
+	global:
+
+	rte_mempool_op_calc_mem_size_default;
+
+} DPDK_17.11;
+
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [PATCH v1 2/9] mempool: add op to populate objects using provided memory
  2018-03-10 15:39   ` [PATCH v1 0/9] mempool: prepare to add bucket driver Andrew Rybchenko
  2018-03-10 15:39     ` [PATCH v1 1/9] mempool: add op to calculate memory size to be allocated Andrew Rybchenko
@ 2018-03-10 15:39     ` Andrew Rybchenko
  2018-03-19 17:04       ` Olivier Matz
  2018-03-10 15:39     ` [PATCH v1 3/9] mempool: remove callback to get capabilities Andrew Rybchenko
                       ` (8 subsequent siblings)
  10 siblings, 1 reply; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-10 15:39 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ

The callback allows customizing how objects are stored in the
memory chunk. A default implementation of the callback, which simply
places objects one by one, is provided.
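
For illustration, a driver that needs a private area at the start of
each memory chunk could provide its own populate op and fall back to
the default implementation for the actual object placement. This is
only a sketch against the op signature added below; the driver name
and the reserved size are hypothetical:

  #include <rte_mempool.h>

  /* Hypothetical populate op: skip a driver-private header at the
   * beginning of the chunk, then place objects the default way.
   */
  static int
  example_populate(struct rte_mempool *mp, unsigned int max_objs,
                   void *vaddr, rte_iova_t iova, size_t len,
                   rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
  {
          size_t off = RTE_CACHE_LINE_SIZE; /* assumed private area size */

          if (len < off)
                  return -EINVAL;

          return rte_mempool_op_populate_default(mp, max_objs,
                          (char *)vaddr + off,
                          (iova == RTE_BAD_IOVA) ? RTE_BAD_IOVA : iova + off,
                          len - off, obj_cb, obj_cb_arg);
  }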

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
RFCv2 -> v1:
 - advertise ABI changes in release notes
 - use consistent name for default callback:
   rte_mempool_op_<callback>_default()
 - add opaque data pointer to populated object callback
 - move default callback to dedicated file

 doc/guides/rel_notes/deprecation.rst         |  2 +-
 doc/guides/rel_notes/release_18_05.rst       |  2 +
 lib/librte_mempool/rte_mempool.c             | 23 +++----
 lib/librte_mempool/rte_mempool.h             | 90 ++++++++++++++++++++++++++++
 lib/librte_mempool/rte_mempool_ops.c         | 21 +++++++
 lib/librte_mempool/rte_mempool_ops_default.c | 24 ++++++++
 lib/librte_mempool/rte_mempool_version.map   |  1 +
 7 files changed, 148 insertions(+), 15 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index e02d4ca..c06fc67 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -72,7 +72,7 @@ Deprecation Notices
 
   - removal of ``get_capabilities`` mempool ops and related flags.
   - substitute ``register_memory_area`` with ``populate`` ops.
-  - addition of new ops to customize objects population and allocate contiguous
+  - addition of new op to allocate contiguous
     block of objects if underlying driver supports it.
 
 * mbuf: The control mbuf API will be removed in v18.05. The impacted
diff --git a/doc/guides/rel_notes/release_18_05.rst b/doc/guides/rel_notes/release_18_05.rst
index 59583ea..abaefe5 100644
--- a/doc/guides/rel_notes/release_18_05.rst
+++ b/doc/guides/rel_notes/release_18_05.rst
@@ -84,6 +84,8 @@ ABI Changes
 
   A new callback ``calc_mem_size`` has been added to ``rte_mempool_ops``
   to allow to customize required memory size calculation.
+  A new callback ``populate`` has been added to ``rte_mempool_ops``
+  to allow to customize objects population.
 
 
 Removed Items
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 3bfb36e..ed0e982 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -99,7 +99,8 @@ static unsigned optimize_object_size(unsigned obj_size)
 }
 
 static void
-mempool_add_elem(struct rte_mempool *mp, void *obj, rte_iova_t iova)
+mempool_add_elem(struct rte_mempool *mp, __rte_unused void *opaque,
+		 void *obj, rte_iova_t iova)
 {
 	struct rte_mempool_objhdr *hdr;
 	struct rte_mempool_objtlr *tlr __rte_unused;
@@ -116,9 +117,6 @@ mempool_add_elem(struct rte_mempool *mp, void *obj, rte_iova_t iova)
 	tlr = __mempool_get_trailer(obj);
 	tlr->cookie = RTE_MEMPOOL_TRAILER_COOKIE;
 #endif
-
-	/* enqueue in ring */
-	rte_mempool_ops_enqueue_bulk(mp, &obj, 1);
 }
 
 /* call obj_cb() for each mempool element */
@@ -396,16 +394,13 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	else
 		off = RTE_PTR_ALIGN_CEIL(vaddr, RTE_CACHE_LINE_SIZE) - vaddr;
 
-	while (off + total_elt_sz <= len && mp->populated_size < mp->size) {
-		off += mp->header_size;
-		if (iova == RTE_BAD_IOVA)
-			mempool_add_elem(mp, (char *)vaddr + off,
-				RTE_BAD_IOVA);
-		else
-			mempool_add_elem(mp, (char *)vaddr + off, iova + off);
-		off += mp->elt_size + mp->trailer_size;
-		i++;
-	}
+	if (off > len)
+		return -EINVAL;
+
+	i = rte_mempool_ops_populate(mp, mp->size - mp->populated_size,
+		(char *)vaddr + off,
+		(iova == RTE_BAD_IOVA) ? RTE_BAD_IOVA : (iova + off),
+		len - off, mempool_add_elem, NULL);
 
 	/* not enough room to store one object */
 	if (i == 0)
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 0151f6c..49083bd 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -449,6 +449,63 @@ ssize_t rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
 		uint32_t obj_num, uint32_t pg_shift,
 		size_t *min_chunk_size, size_t *align);
 
+/**
+ * Function to be called for each populated object.
+ *
+ * @param[in] mp
+ *   A pointer to the mempool structure.
+ * @param[in] opaque
+ *   An opaque pointer passed to iterator.
+ * @param[in] vaddr
+ *   Object virtual address.
+ * @param[in] iova
+ *   Input/output virtual address of the object or RTE_BAD_IOVA.
+ */
+typedef void (rte_mempool_populate_obj_cb_t)(struct rte_mempool *mp,
+		void *opaque, void *vaddr, rte_iova_t iova);
+
+/**
+ * Populate memory pool objects using provided memory chunk.
+ *
+ * Populated objects should be enqueued to the pool, e.g. using
+ * rte_mempool_ops_enqueue_bulk().
+ *
+ * If the given IO address is unknown (iova = RTE_BAD_IOVA),
+ * the chunk doesn't need to be physically contiguous (only virtually),
+ * and allocated objects may span two pages.
+ *
+ * @param[in] mp
+ *   A pointer to the mempool structure.
+ * @param[in] max_objs
+ *   Maximum number of objects to be populated.
+ * @param[in] vaddr
+ *   The virtual address of memory that should be used to store objects.
+ * @param[in] iova
+ *   The IO address
+ * @param[in] len
+ *   The length of memory in bytes.
+ * @param[in] obj_cb
+ *   Callback function to be executed for each populated object.
+ * @param[in] obj_cb_arg
+ *   An opaque pointer passed to the callback function.
+ * @return
+ *   The number of objects added on success.
+ *   On error, no objects are populated and a negative errno is returned.
+ */
+typedef int (*rte_mempool_populate_t)(struct rte_mempool *mp,
+		unsigned int max_objs,
+		void *vaddr, rte_iova_t iova, size_t len,
+		rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg);
+
+/**
+ * Default way to populate memory pool object using provided memory
+ * chunk: just slice objects one by one.
+ */
+int rte_mempool_op_populate_default(struct rte_mempool *mp,
+		unsigned int max_objs,
+		void *vaddr, rte_iova_t iova, size_t len,
+		rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg);
+
 /** Structure defining mempool operations structure */
 struct rte_mempool_ops {
 	char name[RTE_MEMPOOL_OPS_NAMESIZE]; /**< Name of mempool ops struct. */
@@ -470,6 +527,11 @@ struct rte_mempool_ops {
 	 * store specified number of objects.
 	 */
 	rte_mempool_calc_mem_size_t calc_mem_size;
+	/**
+	 * Optional callback to populate mempool objects using
+	 * provided memory chunk.
+	 */
+	rte_mempool_populate_t populate;
 } __rte_cache_aligned;
 
 #define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */
@@ -642,6 +704,34 @@ ssize_t rte_mempool_ops_calc_mem_size(const struct rte_mempool *mp,
 				      size_t *min_chunk_size, size_t *align);
 
 /**
+ * @internal wrapper for mempool_ops populate callback.
+ *
+ * Populate memory pool objects using provided memory chunk.
+ *
+ * @param[in] mp
+ *   A pointer to the mempool structure.
+ * @param[in] max_objs
+ *   Maximum number of objects to be populated.
+ * @param[in] vaddr
+ *   The virtual address of memory that should be used to store objects.
+ * @param[in] iova
+ *   The IO address
+ * @param[in] len
+ *   The length of memory in bytes.
+ * @param[in] obj_cb
+ *   Callback function to be executed for each populated object.
+ * @param[in] obj_cb_arg
+ *   An opaque pointer passed to the callback function.
+ * @return
+ *   The number of objects added on success.
+ *   On error, no objects are populated and a negative errno is returned.
+ */
+int rte_mempool_ops_populate(struct rte_mempool *mp, unsigned int max_objs,
+			     void *vaddr, rte_iova_t iova, size_t len,
+			     rte_mempool_populate_obj_cb_t *obj_cb,
+			     void *obj_cb_arg);
+
+/**
  * @internal wrapper for mempool_ops free callback.
  *
  * @param mp
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
index 26908cc..1a7f39f 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -60,6 +60,7 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h)
 	ops->get_capabilities = h->get_capabilities;
 	ops->register_memory_area = h->register_memory_area;
 	ops->calc_mem_size = h->calc_mem_size;
+	ops->populate = h->populate;
 
 	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
 
@@ -141,6 +142,26 @@ rte_mempool_ops_calc_mem_size(const struct rte_mempool *mp,
 	return ops->calc_mem_size(mp, obj_num, pg_shift, min_chunk_size, align);
 }
 
+/* wrapper to populate memory pool objects using provided memory chunk */
+int
+rte_mempool_ops_populate(struct rte_mempool *mp, unsigned int max_objs,
+				void *vaddr, rte_iova_t iova, size_t len,
+				rte_mempool_populate_obj_cb_t *obj_cb,
+				void *obj_cb_arg)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_get_ops(mp->ops_index);
+
+	if (ops->populate == NULL)
+		return rte_mempool_op_populate_default(mp, max_objs, vaddr,
+						       iova, len, obj_cb,
+						       obj_cb_arg);
+
+	return ops->populate(mp, max_objs, vaddr, iova, len, obj_cb,
+			     obj_cb_arg);
+}
+
 /* sets mempool ops previously registered by rte_mempool_register_ops. */
 int
 rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
diff --git a/lib/librte_mempool/rte_mempool_ops_default.c b/lib/librte_mempool/rte_mempool_ops_default.c
index 57fe79b..57295f7 100644
--- a/lib/librte_mempool/rte_mempool_ops_default.c
+++ b/lib/librte_mempool/rte_mempool_ops_default.c
@@ -36,3 +36,27 @@ rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
 
 	return mem_size;
 }
+
+int
+rte_mempool_op_populate_default(struct rte_mempool *mp, unsigned int max_objs,
+		void *vaddr, rte_iova_t iova, size_t len,
+		rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
+{
+	size_t total_elt_sz;
+	size_t off;
+	unsigned int i;
+	void *obj;
+
+	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
+
+	for (off = 0, i = 0; off + total_elt_sz <= len && i < max_objs; i++) {
+		off += mp->header_size;
+		obj = (char *)vaddr + off;
+		obj_cb(mp, obj_cb_arg, obj,
+		       (iova == RTE_BAD_IOVA) ? RTE_BAD_IOVA : (iova + off));
+		rte_mempool_ops_enqueue_bulk(mp, &obj, 1);
+		off += mp->elt_size + mp->trailer_size;
+	}
+
+	return i;
+}
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index e2a054b..90e79ec 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -56,6 +56,7 @@ DPDK_18.05 {
 	global:
 
 	rte_mempool_op_calc_mem_size_default;
+	rte_mempool_op_populate_default;
 
 } DPDK_17.11;
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [PATCH v1 3/9] mempool: remove callback to get capabilities
  2018-03-10 15:39   ` [PATCH v1 0/9] mempool: prepare to add bucket driver Andrew Rybchenko
  2018-03-10 15:39     ` [PATCH v1 1/9] mempool: add op to calculate memory size to be allocated Andrew Rybchenko
  2018-03-10 15:39     ` [PATCH v1 2/9] mempool: add op to populate objects using provided memory Andrew Rybchenko
@ 2018-03-10 15:39     ` Andrew Rybchenko
  2018-03-14 14:40       ` Burakov, Anatoly
  2018-03-19 17:06       ` Olivier Matz
  2018-03-10 15:39     ` [PATCH v1 4/9] mempool: deprecate xmem functions Andrew Rybchenko
                       ` (7 subsequent siblings)
  10 siblings, 2 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-10 15:39 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ

The callback was introduced to let the generic code know that the
octeontx mempool driver requires a single physically contiguous
memory chunk to store all objects, with object addresses aligned to
the total object size. Now these requirements are met using the new
callbacks to calculate the required memory chunk size and to populate
objects using the provided memory chunk.

These capability flags are not used anywhere else.

Restricting capabilities to flags is not generic and is likely to
be insufficient to describe mempool driver features. If required
in the future, an API which returns structured information may be
added.
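
As an illustration of how such a requirement is expressed now, a driver
that needs the whole memory area to be physically contiguous can report
the full size as the minimum chunk size from its calc_mem_size op. This
is a minimal sketch with a made-up driver name; the octeontx changes
below show the real conversion:

  #include <rte_mempool.h>

  /* Hypothetical op: the whole area must be one physically contiguous
   * chunk, so min_chunk_size is set to the total memory size.
   */
  static ssize_t
  example_calc_mem_size(const struct rte_mempool *mp, uint32_t obj_num,
                        uint32_t pg_shift, size_t *min_chunk_size,
                        size_t *align)
  {
          ssize_t mem_size;

          mem_size = rte_mempool_op_calc_mem_size_default(mp, obj_num,
                          pg_shift, min_chunk_size, align);
          if (mem_size >= 0)
                  *min_chunk_size = mem_size;

          return mem_size;
  }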

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
RFCv2 -> v1:
 - squash mempool/octeontx patches to add calc_mem_size and populate
   callbacks to this one in order to avoid breakages in the middle of
   patchset
 - advertise API changes in release notes

 doc/guides/rel_notes/deprecation.rst            |  1 -
 doc/guides/rel_notes/release_18_05.rst          | 11 +++++
 drivers/mempool/octeontx/rte_mempool_octeontx.c | 59 +++++++++++++++++++++----
 lib/librte_mempool/rte_mempool.c                | 44 ++----------------
 lib/librte_mempool/rte_mempool.h                | 52 +---------------------
 lib/librte_mempool/rte_mempool_ops.c            | 14 ------
 lib/librte_mempool/rte_mempool_ops_default.c    | 15 +------
 lib/librte_mempool/rte_mempool_version.map      |  1 -
 8 files changed, 68 insertions(+), 129 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index c06fc67..4deed9a 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -70,7 +70,6 @@ Deprecation Notices
 
   The following changes are planned:
 
-  - removal of ``get_capabilities`` mempool ops and related flags.
   - substitute ``register_memory_area`` with ``populate`` ops.
   - addition of new op to allocate contiguous
     block of objects if underlying driver supports it.
diff --git a/doc/guides/rel_notes/release_18_05.rst b/doc/guides/rel_notes/release_18_05.rst
index abaefe5..c50f26c 100644
--- a/doc/guides/rel_notes/release_18_05.rst
+++ b/doc/guides/rel_notes/release_18_05.rst
@@ -66,6 +66,14 @@ API Changes
    Also, make sure to start the actual text at the margin.
    =========================================================
 
+* **Removed mempool capability flags and related functions.**
+
+  Flags ``MEMPOOL_F_CAPA_PHYS_CONTIG`` and
+  ``MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS`` were used by octeontx mempool
+  driver to customize generic mempool library behaviour.
+  Now the new driver callbacks ``calc_mem_size`` and ``populate`` may be
+  used to achieve it without specific knowledge in the generic code.
+
 
 ABI Changes
 -----------
@@ -86,6 +94,9 @@ ABI Changes
   to allow to customize required memory size calculation.
   A new callback ``populate`` has been added to ``rte_mempool_ops``
   to allow to customize objects population.
+  Callback ``get_capabilities`` has been removed from ``rte_mempool_ops``
+  since its features are covered by ``calc_mem_size`` and ``populate``
+  callbacks.
 
 
 Removed Items
diff --git a/drivers/mempool/octeontx/rte_mempool_octeontx.c b/drivers/mempool/octeontx/rte_mempool_octeontx.c
index d143d05..f2c4f6a 100644
--- a/drivers/mempool/octeontx/rte_mempool_octeontx.c
+++ b/drivers/mempool/octeontx/rte_mempool_octeontx.c
@@ -126,14 +126,29 @@ octeontx_fpavf_get_count(const struct rte_mempool *mp)
 	return octeontx_fpa_bufpool_free_count(pool);
 }
 
-static int
-octeontx_fpavf_get_capabilities(const struct rte_mempool *mp,
-				unsigned int *flags)
+static ssize_t
+octeontx_fpavf_calc_mem_size(const struct rte_mempool *mp,
+			     uint32_t obj_num, uint32_t pg_shift,
+			     size_t *min_chunk_size, size_t *align)
 {
-	RTE_SET_USED(mp);
-	*flags |= (MEMPOOL_F_CAPA_PHYS_CONTIG |
-			MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS);
-	return 0;
+	ssize_t mem_size;
+
+	/*
+	 * Simply need space for one more object to be able to
+	 * fulfil alignment requirements.
+	 */
+	mem_size = rte_mempool_op_calc_mem_size_default(mp, obj_num + 1,
+							pg_shift,
+							min_chunk_size, align);
+	if (mem_size >= 0) {
+		/*
+		 * Memory area which contains objects must be physically
+		 * contiguous.
+		 */
+		*min_chunk_size = mem_size;
+	}
+
+	return mem_size;
 }
 
 static int
@@ -150,6 +165,33 @@ octeontx_fpavf_register_memory_area(const struct rte_mempool *mp,
 	return octeontx_fpavf_pool_set_range(pool_bar, len, vaddr, gpool);
 }
 
+static int
+octeontx_fpavf_populate(struct rte_mempool *mp, unsigned int max_objs,
+			void *vaddr, rte_iova_t iova, size_t len,
+			rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
+{
+	size_t total_elt_sz;
+	size_t off;
+
+	if (iova == RTE_BAD_IOVA)
+		return -EINVAL;
+
+	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
+
+	/* align object start address to a multiple of total_elt_sz */
+	off = total_elt_sz - ((uintptr_t)vaddr % total_elt_sz);
+
+	if (len < off)
+		return -EINVAL;
+
+	vaddr = (char *)vaddr + off;
+	iova += off;
+	len -= off;
+
+	return rte_mempool_op_populate_default(mp, max_objs, vaddr, iova, len,
+					       obj_cb, obj_cb_arg);
+}
+
 static struct rte_mempool_ops octeontx_fpavf_ops = {
 	.name = "octeontx_fpavf",
 	.alloc = octeontx_fpavf_alloc,
@@ -157,8 +199,9 @@ static struct rte_mempool_ops octeontx_fpavf_ops = {
 	.enqueue = octeontx_fpavf_enqueue,
 	.dequeue = octeontx_fpavf_dequeue,
 	.get_count = octeontx_fpavf_get_count,
-	.get_capabilities = octeontx_fpavf_get_capabilities,
 	.register_memory_area = octeontx_fpavf_register_memory_area,
+	.calc_mem_size = octeontx_fpavf_calc_mem_size,
+	.populate = octeontx_fpavf_populate,
 };
 
 MEMPOOL_REGISTER_OPS(octeontx_fpavf_ops);
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index ed0e982..fdcda45 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -208,15 +208,9 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
  */
 size_t
 rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
-		      unsigned int flags)
+		      __rte_unused unsigned int flags)
 {
 	size_t obj_per_page, pg_num, pg_sz;
-	unsigned int mask;
-
-	mask = MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS | MEMPOOL_F_CAPA_PHYS_CONTIG;
-	if ((flags & mask) == mask)
-		/* alignment need one additional object */
-		elt_num += 1;
 
 	if (total_elt_sz == 0)
 		return 0;
@@ -240,18 +234,12 @@ rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
 ssize_t
 rte_mempool_xmem_usage(__rte_unused void *vaddr, uint32_t elt_num,
 	size_t total_elt_sz, const rte_iova_t iova[], uint32_t pg_num,
-	uint32_t pg_shift, unsigned int flags)
+	uint32_t pg_shift, __rte_unused unsigned int flags)
 {
 	uint32_t elt_cnt = 0;
 	rte_iova_t start, end;
 	uint32_t iova_idx;
 	size_t pg_sz = (size_t)1 << pg_shift;
-	unsigned int mask;
-
-	mask = MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS | MEMPOOL_F_CAPA_PHYS_CONTIG;
-	if ((flags & mask) == mask)
-		/* alignment need one additional object */
-		elt_num += 1;
 
 	/* if iova is NULL, assume contiguous memory */
 	if (iova == NULL) {
@@ -330,8 +318,6 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
 	void *opaque)
 {
-	unsigned total_elt_sz;
-	unsigned int mp_capa_flags;
 	unsigned i = 0;
 	size_t off;
 	struct rte_mempool_memhdr *memhdr;
@@ -354,27 +340,6 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	if (mp->populated_size >= mp->size)
 		return -ENOSPC;
 
-	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
-
-	/* Get mempool capabilities */
-	mp_capa_flags = 0;
-	ret = rte_mempool_ops_get_capabilities(mp, &mp_capa_flags);
-	if ((ret < 0) && (ret != -ENOTSUP))
-		return ret;
-
-	/* update mempool capabilities */
-	mp->flags |= mp_capa_flags;
-
-	/* Detect pool area has sufficient space for elements */
-	if (mp_capa_flags & MEMPOOL_F_CAPA_PHYS_CONTIG) {
-		if (len < total_elt_sz * mp->size) {
-			RTE_LOG(ERR, MEMPOOL,
-				"pool area %" PRIx64 " not enough\n",
-				(uint64_t)len);
-			return -ENOSPC;
-		}
-	}
-
 	memhdr = rte_zmalloc("MEMPOOL_MEMHDR", sizeof(*memhdr), 0);
 	if (memhdr == NULL)
 		return -ENOMEM;
@@ -386,10 +351,7 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	memhdr->free_cb = free_cb;
 	memhdr->opaque = opaque;
 
-	if (mp_capa_flags & MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS)
-		/* align object start address to a multiple of total_elt_sz */
-		off = total_elt_sz - ((uintptr_t)vaddr % total_elt_sz);
-	else if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
+	if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
 		off = RTE_PTR_ALIGN_CEIL(vaddr, 8) - vaddr;
 	else
 		off = RTE_PTR_ALIGN_CEIL(vaddr, RTE_CACHE_LINE_SIZE) - vaddr;
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 49083bd..cd3b229 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -245,24 +245,6 @@ struct rte_mempool {
 #define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
 #define MEMPOOL_F_POOL_CREATED   0x0010 /**< Internal: pool is created. */
 #define MEMPOOL_F_NO_PHYS_CONTIG 0x0020 /**< Don't need physically contiguous objs. */
-/**
- * This capability flag is advertised by a mempool handler, if the whole
- * memory area containing the objects must be physically contiguous.
- * Note: This flag should not be passed by application.
- */
-#define MEMPOOL_F_CAPA_PHYS_CONTIG 0x0040
-/**
- * This capability flag is advertised by a mempool handler. Used for a case
- * where mempool driver wants object start address(vaddr) aligned to block
- * size(/ total element size).
- *
- * Note:
- * - This flag should not be passed by application.
- *   Flag used for mempool driver only.
- * - Mempool driver must also set MEMPOOL_F_CAPA_PHYS_CONTIG flag along with
- *   MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS.
- */
-#define MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS 0x0080
 
 /**
  * @internal When debug is enabled, store some statistics.
@@ -388,12 +370,6 @@ typedef int (*rte_mempool_dequeue_t)(struct rte_mempool *mp,
 typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
 
 /**
- * Get the mempool capabilities.
- */
-typedef int (*rte_mempool_get_capabilities_t)(const struct rte_mempool *mp,
-		unsigned int *flags);
-
-/**
  * Notify new memory area to mempool.
  */
 typedef int (*rte_mempool_ops_register_memory_area_t)
@@ -433,13 +409,7 @@ typedef ssize_t (*rte_mempool_calc_mem_size_t)(const struct rte_mempool *mp,
  * that pages are grouped in subsets of physically continuous pages big
  * enough to store at least one object.
  *
- * If mempool driver requires object addresses to be block size aligned
- * (MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS), space for one extra element is
- * reserved to be able to meet the requirement.
- *
- * Minimum size of memory chunk is either all required space, if
- * capabilities say that whole memory area must be physically contiguous
- * (MEMPOOL_F_CAPA_PHYS_CONTIG), or a maximum of the page size and total
+ * Minimum size of memory chunk is a maximum of the page size and total
  * element size.
  *
  * Required memory chunk alignment is a maximum of page size and cache
@@ -515,10 +485,6 @@ struct rte_mempool_ops {
 	rte_mempool_dequeue_t dequeue;   /**< Dequeue an object. */
 	rte_mempool_get_count get_count; /**< Get qty of available objs. */
 	/**
-	 * Get the mempool capabilities
-	 */
-	rte_mempool_get_capabilities_t get_capabilities;
-	/**
 	 * Notify new memory area to mempool
 	 */
 	rte_mempool_ops_register_memory_area_t register_memory_area;
@@ -644,22 +610,6 @@ unsigned
 rte_mempool_ops_get_count(const struct rte_mempool *mp);
 
 /**
- * @internal wrapper for mempool_ops get_capabilities callback.
- *
- * @param mp [in]
- *   Pointer to the memory pool.
- * @param flags [out]
- *   Pointer to the mempool flags.
- * @return
- *   - 0: Success; The mempool driver has advertised his pool capabilities in
- *   flags param.
- *   - -ENOTSUP - doesn't support get_capabilities ops (valid case).
- *   - Otherwise, pool create fails.
- */
-int
-rte_mempool_ops_get_capabilities(const struct rte_mempool *mp,
-					unsigned int *flags);
-/**
  * @internal wrapper for mempool_ops register_memory_area callback.
  * API to notify the mempool handler when a new memory area is added to pool.
  *
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
index 1a7f39f..6ac669a 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -57,7 +57,6 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h)
 	ops->enqueue = h->enqueue;
 	ops->dequeue = h->dequeue;
 	ops->get_count = h->get_count;
-	ops->get_capabilities = h->get_capabilities;
 	ops->register_memory_area = h->register_memory_area;
 	ops->calc_mem_size = h->calc_mem_size;
 	ops->populate = h->populate;
@@ -99,19 +98,6 @@ rte_mempool_ops_get_count(const struct rte_mempool *mp)
 	return ops->get_count(mp);
 }
 
-/* wrapper to get external mempool capabilities. */
-int
-rte_mempool_ops_get_capabilities(const struct rte_mempool *mp,
-					unsigned int *flags)
-{
-	struct rte_mempool_ops *ops;
-
-	ops = rte_mempool_get_ops(mp->ops_index);
-
-	RTE_FUNC_PTR_OR_ERR_RET(ops->get_capabilities, -ENOTSUP);
-	return ops->get_capabilities(mp, flags);
-}
-
 /* wrapper to notify new memory area to external mempool */
 int
 rte_mempool_ops_register_memory_area(const struct rte_mempool *mp, char *vaddr,
diff --git a/lib/librte_mempool/rte_mempool_ops_default.c b/lib/librte_mempool/rte_mempool_ops_default.c
index 57295f7..3defc15 100644
--- a/lib/librte_mempool/rte_mempool_ops_default.c
+++ b/lib/librte_mempool/rte_mempool_ops_default.c
@@ -11,26 +11,15 @@ rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
 				     uint32_t obj_num, uint32_t pg_shift,
 				     size_t *min_chunk_size, size_t *align)
 {
-	unsigned int mp_flags;
-	int ret;
 	size_t total_elt_sz;
 	size_t mem_size;
 
-	/* Get mempool capabilities */
-	mp_flags = 0;
-	ret = rte_mempool_ops_get_capabilities(mp, &mp_flags);
-	if ((ret < 0) && (ret != -ENOTSUP))
-		return ret;
-
 	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
 
 	mem_size = rte_mempool_xmem_size(obj_num, total_elt_sz, pg_shift,
-					 mp->flags | mp_flags);
+					 mp->flags);
 
-	if (mp_flags & MEMPOOL_F_CAPA_PHYS_CONTIG)
-		*min_chunk_size = mem_size;
-	else
-		*min_chunk_size = RTE_MAX((size_t)1 << pg_shift, total_elt_sz);
+	*min_chunk_size = RTE_MAX((size_t)1 << pg_shift, total_elt_sz);
 
 	*align = RTE_MAX((size_t)RTE_CACHE_LINE_SIZE, (size_t)1 << pg_shift);
 
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index 90e79ec..42ca4df 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -45,7 +45,6 @@ DPDK_16.07 {
 DPDK_17.11 {
 	global:
 
-	rte_mempool_ops_get_capabilities;
 	rte_mempool_ops_register_memory_area;
 	rte_mempool_populate_iova;
 	rte_mempool_populate_iova_tab;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [PATCH v1 4/9] mempool: deprecate xmem functions
  2018-03-10 15:39   ` [PATCH v1 0/9] mempool: prepare to add bucket driver Andrew Rybchenko
                       ` (2 preceding siblings ...)
  2018-03-10 15:39     ` [PATCH v1 3/9] mempool: remove callback to get capabilities Andrew Rybchenko
@ 2018-03-10 15:39     ` Andrew Rybchenko
  2018-03-10 15:39     ` [PATCH v1 5/9] mempool/octeontx: prepare to remove register memory area op Andrew Rybchenko
                       ` (6 subsequent siblings)
  10 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-10 15:39 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ

Move the rte_mempool_xmem_size() code to an internal helper function
since it is required in two places: the deprecated rte_mempool_xmem_size()
and the non-deprecated rte_mempool_op_calc_mem_size_default().
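
For reference, the helper follows the semantics documented for the
default op: objects do not cross page boundaries, so the result is a
whole number of pages. A simplified sketch of that calculation
(assuming each object fits into one page and ignoring the
oversized-object case) is:

  /* Simplified sketch; pg_shift == 0 means page boundaries are ignored. */
  static size_t
  calc_mem_size_sketch(uint32_t elt_num, size_t total_elt_sz,
                       uint32_t pg_shift)
  {
          size_t pg_sz, obj_per_page, pg_num;

          if (pg_shift == 0)
                  return total_elt_sz * elt_num;

          pg_sz = (size_t)1 << pg_shift;
          obj_per_page = pg_sz / total_elt_sz; /* objects per page */
          pg_num = (elt_num + obj_per_page - 1) / obj_per_page;

          return pg_num << pg_shift; /* whole pages */
  }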

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
RFCv2 -> v1:
 - advertise deprecation in release notes
 - factor out default memory size calculation into non-deprecated
   internal function to avoid usage of deprecated function internally
 - remove test for deprecated functions to address build issue because
   of usage of deprecated functions (it is easy to allow usage of
   deprecated function in Makefile, but very complicated in meson)

 doc/guides/rel_notes/deprecation.rst         |  7 -------
 doc/guides/rel_notes/release_18_05.rst       | 10 +++++++++
 lib/librte_mempool/rte_mempool.c             | 19 ++++++++++++++---
 lib/librte_mempool/rte_mempool.h             | 25 ++++++++++++++++++++++
 lib/librte_mempool/rte_mempool_ops_default.c |  4 ++--
 test/test/test_mempool.c                     | 31 ----------------------------
 6 files changed, 53 insertions(+), 43 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 4deed9a..473330d 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -60,13 +60,6 @@ Deprecation Notices
   - ``rte_eal_mbuf_default_mempool_ops``
 
 * mempool: several API and ABI changes are planned in v18.05.
-  The following functions, introduced for Xen, which is not supported
-  anymore since v17.11, are hard to use, not used anywhere else in DPDK.
-  Therefore they will be deprecated in v18.05 and removed in v18.08:
-
-  - ``rte_mempool_xmem_create``
-  - ``rte_mempool_xmem_size``
-  - ``rte_mempool_xmem_usage``
 
   The following changes are planned:
 
diff --git a/doc/guides/rel_notes/release_18_05.rst b/doc/guides/rel_notes/release_18_05.rst
index c50f26c..0244f91 100644
--- a/doc/guides/rel_notes/release_18_05.rst
+++ b/doc/guides/rel_notes/release_18_05.rst
@@ -74,6 +74,16 @@ API Changes
   Now the new driver callbacks ``calc_mem_size`` and ``populate`` may be
   used to achieve it without specific knowledge in the generic code.
 
+* **Deprecated mempool xmem functions.**
+
+  The following functions, introduced for Xen, which is not supported
+  anymore since v17.11, are hard to use, not used anywhere else in DPDK.
+  Therefore they were deprecated in v18.05 and will be removed in v18.08:
+
+  - ``rte_mempool_xmem_create``
+  - ``rte_mempool_xmem_size``
+  - ``rte_mempool_xmem_usage``
+
 
 ABI Changes
 -----------
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index fdcda45..b57ba2a 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -204,11 +204,13 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
 
 
 /*
- * Calculate maximum amount of memory required to store given number of objects.
+ * Internal function to calculate required memory chunk size shared
+ * by default implementation of the corresponding callback and
+ * deprecated external function.
  */
 size_t
-rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
-		      __rte_unused unsigned int flags)
+rte_mempool_calc_mem_size_helper(uint32_t elt_num, size_t total_elt_sz,
+				 uint32_t pg_shift)
 {
 	size_t obj_per_page, pg_num, pg_sz;
 
@@ -228,6 +230,17 @@ rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
 }
 
 /*
+ * Calculate maximum amount of memory required to store given number of objects.
+ */
+size_t
+rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
+		      __rte_unused unsigned int flags)
+{
+	return rte_mempool_calc_mem_size_helper(elt_num, total_elt_sz,
+						pg_shift);
+}
+
+/*
  * Calculate how much memory would be actually required with the
  * given memory footprint to store required number of elements.
  */
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index cd3b229..ebfc95c 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -420,6 +420,28 @@ ssize_t rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
 		size_t *min_chunk_size, size_t *align);
 
 /**
+ * @internal Helper function to calculate memory size required to store
+ * specified number of objects, assuming that the memory buffer will
+ * be aligned at page boundary.
+ *
+ * Note that if object size is bigger than page size, then it assumes
+ * that pages are grouped in subsets of physically continuous pages big
+ * enough to store at least one object.
+ *
+ * @param elt_num
+ *   Number of elements.
+ * @param total_elt_sz
+ *   The size of each element, including header and trailer, as returned
+ *   by rte_mempool_calc_obj_size().
+ * @param pg_shift
+ *   LOG2 of the physical pages size. If set to 0, ignore page boundaries.
+ * @return
+ *   Required memory size aligned at page boundary.
+ */
+size_t rte_mempool_calc_mem_size_helper(uint32_t elt_num, size_t total_elt_sz,
+		uint32_t pg_shift);
+
+/**
  * Function to be called for each populated object.
  *
  * @param[in] mp
@@ -905,6 +927,7 @@ rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
  *   The pointer to the new allocated mempool, on success. NULL on error
  *   with rte_errno set appropriately. See rte_mempool_create() for details.
  */
+__rte_deprecated
 struct rte_mempool *
 rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
 		unsigned cache_size, unsigned private_data_size,
@@ -1667,6 +1690,7 @@ uint32_t rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
  * @return
  *   Required memory size aligned at page boundary.
  */
+__rte_deprecated
 size_t rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz,
 	uint32_t pg_shift, unsigned int flags);
 
@@ -1698,6 +1722,7 @@ size_t rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz,
  *   buffer is too small, return a negative value whose absolute value
  *   is the actual number of elements that can be stored in that buffer.
  */
+__rte_deprecated
 ssize_t rte_mempool_xmem_usage(void *vaddr, uint32_t elt_num,
 	size_t total_elt_sz, const rte_iova_t iova[], uint32_t pg_num,
 	uint32_t pg_shift, unsigned int flags);
diff --git a/lib/librte_mempool/rte_mempool_ops_default.c b/lib/librte_mempool/rte_mempool_ops_default.c
index 3defc15..fd63ca1 100644
--- a/lib/librte_mempool/rte_mempool_ops_default.c
+++ b/lib/librte_mempool/rte_mempool_ops_default.c
@@ -16,8 +16,8 @@ rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
 
 	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
 
-	mem_size = rte_mempool_xmem_size(obj_num, total_elt_sz, pg_shift,
-					 mp->flags);
+	mem_size = rte_mempool_calc_mem_size_helper(obj_num, total_elt_sz,
+						    pg_shift);
 
 	*min_chunk_size = RTE_MAX((size_t)1 << pg_shift, total_elt_sz);
 
diff --git a/test/test/test_mempool.c b/test/test/test_mempool.c
index 63f921e..8d29af2 100644
--- a/test/test/test_mempool.c
+++ b/test/test/test_mempool.c
@@ -444,34 +444,6 @@ test_mempool_same_name_twice_creation(void)
 	return 0;
 }
 
-/*
- * Basic test for mempool_xmem functions.
- */
-static int
-test_mempool_xmem_misc(void)
-{
-	uint32_t elt_num, total_size;
-	size_t sz;
-	ssize_t usz;
-
-	elt_num = MAX_KEEP;
-	total_size = rte_mempool_calc_obj_size(MEMPOOL_ELT_SIZE, 0, NULL);
-	sz = rte_mempool_xmem_size(elt_num, total_size, MEMPOOL_PG_SHIFT_MAX,
-					0);
-
-	usz = rte_mempool_xmem_usage(NULL, elt_num, total_size, 0, 1,
-		MEMPOOL_PG_SHIFT_MAX, 0);
-
-	if (sz != (size_t)usz)  {
-		printf("failure @ %s: rte_mempool_xmem_usage(%u, %u) "
-			"returns: %#zx, while expected: %#zx;\n",
-			__func__, elt_num, total_size, sz, (size_t)usz);
-		return -1;
-	}
-
-	return 0;
-}
-
 static void
 walk_cb(struct rte_mempool *mp, void *userdata __rte_unused)
 {
@@ -596,9 +568,6 @@ test_mempool(void)
 	if (test_mempool_same_name_twice_creation() < 0)
 		goto err;
 
-	if (test_mempool_xmem_misc() < 0)
-		goto err;
-
 	/* test the stack handler */
 	if (test_mempool_basic(mp_stack, 1) < 0)
 		goto err;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [PATCH v1 5/9] mempool/octeontx: prepare to remove register memory area op
  2018-03-10 15:39   ` [PATCH v1 0/9] mempool: prepare to add bucket driver Andrew Rybchenko
                       ` (3 preceding siblings ...)
  2018-03-10 15:39     ` [PATCH v1 4/9] mempool: deprecate xmem functions Andrew Rybchenko
@ 2018-03-10 15:39     ` Andrew Rybchenko
  2018-03-10 15:39     ` [PATCH v1 6/9] mempool/dpaa: " Andrew Rybchenko
                       ` (5 subsequent siblings)
  10 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-10 15:39 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ, Santosh Shukla, Jerin Jacob

The callback to populate pool objects has all required information and
is executed a bit later than the register memory area callback.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/mempool/octeontx/rte_mempool_octeontx.c | 25 ++++++++++---------------
 1 file changed, 10 insertions(+), 15 deletions(-)

diff --git a/drivers/mempool/octeontx/rte_mempool_octeontx.c b/drivers/mempool/octeontx/rte_mempool_octeontx.c
index f2c4f6a..ae038d3 100644
--- a/drivers/mempool/octeontx/rte_mempool_octeontx.c
+++ b/drivers/mempool/octeontx/rte_mempool_octeontx.c
@@ -152,26 +152,15 @@ octeontx_fpavf_calc_mem_size(const struct rte_mempool *mp,
 }
 
 static int
-octeontx_fpavf_register_memory_area(const struct rte_mempool *mp,
-				    char *vaddr, rte_iova_t paddr, size_t len)
-{
-	RTE_SET_USED(paddr);
-	uint8_t gpool;
-	uintptr_t pool_bar;
-
-	gpool = octeontx_fpa_bufpool_gpool(mp->pool_id);
-	pool_bar = mp->pool_id & ~(uint64_t)FPA_GPOOL_MASK;
-
-	return octeontx_fpavf_pool_set_range(pool_bar, len, vaddr, gpool);
-}
-
-static int
 octeontx_fpavf_populate(struct rte_mempool *mp, unsigned int max_objs,
 			void *vaddr, rte_iova_t iova, size_t len,
 			rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
 {
 	size_t total_elt_sz;
 	size_t off;
+	uint8_t gpool;
+	uintptr_t pool_bar;
+	int ret;
 
 	if (iova == RTE_BAD_IOVA)
 		return -EINVAL;
@@ -188,6 +177,13 @@ octeontx_fpavf_populate(struct rte_mempool *mp, unsigned int max_objs,
 	iova += off;
 	len -= off;
 
+	gpool = octeontx_fpa_bufpool_gpool(mp->pool_id);
+	pool_bar = mp->pool_id & ~(uint64_t)FPA_GPOOL_MASK;
+
+	ret = octeontx_fpavf_pool_set_range(pool_bar, len, vaddr, gpool);
+	if (ret < 0)
+		return ret;
+
 	return rte_mempool_op_populate_default(mp, max_objs, vaddr, iova, len,
 					       obj_cb, obj_cb_arg);
 }
@@ -199,7 +195,6 @@ static struct rte_mempool_ops octeontx_fpavf_ops = {
 	.enqueue = octeontx_fpavf_enqueue,
 	.dequeue = octeontx_fpavf_dequeue,
 	.get_count = octeontx_fpavf_get_count,
-	.register_memory_area = octeontx_fpavf_register_memory_area,
 	.calc_mem_size = octeontx_fpavf_calc_mem_size,
 	.populate = octeontx_fpavf_populate,
 };
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [PATCH v1 6/9] mempool/dpaa: prepare to remove register memory area op
  2018-03-10 15:39   ` [PATCH v1 0/9] mempool: prepare to add bucket driver Andrew Rybchenko
                       ` (4 preceding siblings ...)
  2018-03-10 15:39     ` [PATCH v1 5/9] mempool/octeontx: prepare to remove register memory area op Andrew Rybchenko
@ 2018-03-10 15:39     ` Andrew Rybchenko
  2018-03-10 15:39     ` [PATCH v1 7/9] mempool: remove callback to register memory area Andrew Rybchenko
                       ` (4 subsequent siblings)
  10 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-10 15:39 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ, Hemant Agrawal, Shreyansh Jain

The populate mempool driver callback is executed a bit later than
register memory area, provides the same information and will
substitute the latter since it gives more flexibility: in addition
to notifying about the memory area, it allows customizing how mempool
objects are stored in memory.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/mempool/dpaa/dpaa_mempool.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/drivers/mempool/dpaa/dpaa_mempool.c b/drivers/mempool/dpaa/dpaa_mempool.c
index fb3b6ba..a2bbb39 100644
--- a/drivers/mempool/dpaa/dpaa_mempool.c
+++ b/drivers/mempool/dpaa/dpaa_mempool.c
@@ -264,10 +264,9 @@ dpaa_mbuf_get_count(const struct rte_mempool *mp)
 }
 
 static int
-dpaa_register_memory_area(const struct rte_mempool *mp,
-			  char *vaddr __rte_unused,
-			  rte_iova_t paddr __rte_unused,
-			  size_t len)
+dpaa_populate(const struct rte_mempool *mp, unsigned int max_objs,
+	      char *vaddr, rte_iova_t paddr, size_t len,
+	      rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
 {
 	struct dpaa_bp_info *bp_info;
 	unsigned int total_elt_sz;
@@ -289,7 +288,9 @@ dpaa_register_memory_area(const struct rte_mempool *mp,
 	if (len >= total_elt_sz * mp->size)
 		bp_info->flags |= DPAA_MPOOL_SINGLE_SEGMENT;
 
-	return 0;
+	return rte_mempool_op_populate_default(mp, max_objs, vaddr, paddr, len,
+					       obj_cb, obj_cb_arg);
+
 }
 
 struct rte_mempool_ops dpaa_mpool_ops = {
@@ -299,7 +300,7 @@ struct rte_mempool_ops dpaa_mpool_ops = {
 	.enqueue = dpaa_mbuf_free_bulk,
 	.dequeue = dpaa_mbuf_alloc_bulk,
 	.get_count = dpaa_mbuf_get_count,
-	.register_memory_area = dpaa_register_memory_area,
+	.populate = dpaa_populate,
 };
 
 MEMPOOL_REGISTER_OPS(dpaa_mpool_ops);
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [PATCH v1 7/9] mempool: remove callback to register memory area
  2018-03-10 15:39   ` [PATCH v1 0/9] mempool: prepare to add bucket driver Andrew Rybchenko
                       ` (5 preceding siblings ...)
  2018-03-10 15:39     ` [PATCH v1 6/9] mempool/dpaa: " Andrew Rybchenko
@ 2018-03-10 15:39     ` Andrew Rybchenko
  2018-03-10 15:39     ` [PATCH v1 8/9] mempool: ensure the mempool is initialized before populating Andrew Rybchenko
                       ` (3 subsequent siblings)
  10 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-10 15:39 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ

The callback is not required anymore since there is a new callback
to populate objects using the provided memory area, which conveys
the same information.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
RFCv2 -> v1:
 - advertise ABI changes in release notes

 doc/guides/rel_notes/deprecation.rst       |  1 -
 doc/guides/rel_notes/release_18_05.rst     |  2 ++
 lib/librte_mempool/rte_mempool.c           |  5 -----
 lib/librte_mempool/rte_mempool.h           | 31 ------------------------------
 lib/librte_mempool/rte_mempool_ops.c       | 14 --------------
 lib/librte_mempool/rte_mempool_version.map |  1 -
 6 files changed, 2 insertions(+), 52 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 473330d..5301259 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -63,7 +63,6 @@ Deprecation Notices
 
   The following changes are planned:
 
-  - substitute ``register_memory_area`` with ``populate`` ops.
   - addition of new op to allocate contiguous
     block of objects if underlying driver supports it.
 
diff --git a/doc/guides/rel_notes/release_18_05.rst b/doc/guides/rel_notes/release_18_05.rst
index 0244f91..9d40db1 100644
--- a/doc/guides/rel_notes/release_18_05.rst
+++ b/doc/guides/rel_notes/release_18_05.rst
@@ -107,6 +107,8 @@ ABI Changes
   Callback ``get_capabilities`` has been removed from ``rte_mempool_ops``
   since its features are covered by ``calc_mem_size`` and ``populate``
   callbacks.
+  Callback ``register_memory_area`` has been removed from ``rte_mempool_ops``
+  since the new callback ``populate`` may be used instead of it.
 
 
 Removed Items
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index b57ba2a..844d907 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -344,11 +344,6 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 		mp->flags |= MEMPOOL_F_POOL_CREATED;
 	}
 
-	/* Notify memory area to mempool */
-	ret = rte_mempool_ops_register_memory_area(mp, vaddr, iova, len);
-	if (ret != -ENOTSUP && ret < 0)
-		return ret;
-
 	/* mempool is already populated */
 	if (mp->populated_size >= mp->size)
 		return -ENOSPC;
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index ebfc95c..5f63f86 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -370,12 +370,6 @@ typedef int (*rte_mempool_dequeue_t)(struct rte_mempool *mp,
 typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
 
 /**
- * Notify new memory area to mempool.
- */
-typedef int (*rte_mempool_ops_register_memory_area_t)
-(const struct rte_mempool *mp, char *vaddr, rte_iova_t iova, size_t len);
-
-/**
  * Calculate memory size required to store given number of objects.
  *
  * @param[in] mp
@@ -507,10 +501,6 @@ struct rte_mempool_ops {
 	rte_mempool_dequeue_t dequeue;   /**< Dequeue an object. */
 	rte_mempool_get_count get_count; /**< Get qty of available objs. */
 	/**
-	 * Notify new memory area to mempool
-	 */
-	rte_mempool_ops_register_memory_area_t register_memory_area;
-	/**
 	 * Optional callback to calculate memory size required to
 	 * store specified number of objects.
 	 */
@@ -632,27 +622,6 @@ unsigned
 rte_mempool_ops_get_count(const struct rte_mempool *mp);
 
 /**
- * @internal wrapper for mempool_ops register_memory_area callback.
- * API to notify the mempool handler when a new memory area is added to pool.
- *
- * @param mp
- *   Pointer to the memory pool.
- * @param vaddr
- *   Pointer to the buffer virtual address.
- * @param iova
- *   Pointer to the buffer IO address.
- * @param len
- *   Pool size.
- * @return
- *   - 0: Success;
- *   - -ENOTSUP - doesn't support register_memory_area ops (valid error case).
- *   - Otherwise, rte_mempool_populate_phys fails thus pool create fails.
- */
-int
-rte_mempool_ops_register_memory_area(const struct rte_mempool *mp,
-				char *vaddr, rte_iova_t iova, size_t len);
-
-/**
  * @internal wrapper for mempool_ops calc_mem_size callback.
  * API to calculate size of memory required to store specified number of
  * object.
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
index 6ac669a..ea9be1e 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -57,7 +57,6 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h)
 	ops->enqueue = h->enqueue;
 	ops->dequeue = h->dequeue;
 	ops->get_count = h->get_count;
-	ops->register_memory_area = h->register_memory_area;
 	ops->calc_mem_size = h->calc_mem_size;
 	ops->populate = h->populate;
 
@@ -99,19 +98,6 @@ rte_mempool_ops_get_count(const struct rte_mempool *mp)
 }
 
 /* wrapper to notify new memory area to external mempool */
-int
-rte_mempool_ops_register_memory_area(const struct rte_mempool *mp, char *vaddr,
-					rte_iova_t iova, size_t len)
-{
-	struct rte_mempool_ops *ops;
-
-	ops = rte_mempool_get_ops(mp->ops_index);
-
-	RTE_FUNC_PTR_OR_ERR_RET(ops->register_memory_area, -ENOTSUP);
-	return ops->register_memory_area(mp, vaddr, iova, len);
-}
-
-/* wrapper to notify new memory area to external mempool */
 ssize_t
 rte_mempool_ops_calc_mem_size(const struct rte_mempool *mp,
 				uint32_t obj_num, uint32_t pg_shift,
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index 42ca4df..f539a5a 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -45,7 +45,6 @@ DPDK_16.07 {
 DPDK_17.11 {
 	global:
 
-	rte_mempool_ops_register_memory_area;
 	rte_mempool_populate_iova;
 	rte_mempool_populate_iova_tab;
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [PATCH v1 8/9] mempool: ensure the mempool is initialized before populating
  2018-03-10 15:39   ` [PATCH v1 0/9] mempool: prepare to add bucket driver Andrew Rybchenko
                       ` (6 preceding siblings ...)
  2018-03-10 15:39     ` [PATCH v1 7/9] mempool: remove callback to register memory area Andrew Rybchenko
@ 2018-03-10 15:39     ` Andrew Rybchenko
  2018-03-19 17:06       ` Olivier Matz
  2018-03-10 15:39     ` [PATCH v1 9/9] mempool: support flushing the default cache of the mempool Andrew Rybchenko
                       ` (2 subsequent siblings)
  10 siblings, 1 reply; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-10 15:39 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ, Artem V. Andreev

From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>

The callback that calculates the required memory area size may require
the mempool driver data to be already allocated and initialized.

Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
RFCv2 -> v1:
 - rename helper function as mempool_ops_alloc_once()

 lib/librte_mempool/rte_mempool.c | 29 ++++++++++++++++++++++-------
 1 file changed, 22 insertions(+), 7 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 844d907..12085cd 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -322,6 +322,21 @@ rte_mempool_free_memchunks(struct rte_mempool *mp)
 	}
 }
 
+static int
+mempool_ops_alloc_once(struct rte_mempool *mp)
+{
+	int ret;
+
+	/* create the internal ring if not already done */
+	if ((mp->flags & MEMPOOL_F_POOL_CREATED) == 0) {
+		ret = rte_mempool_ops_alloc(mp);
+		if (ret != 0)
+			return ret;
+		mp->flags |= MEMPOOL_F_POOL_CREATED;
+	}
+	return 0;
+}
+
 /* Add objects in the pool, using a physically contiguous memory
  * zone. Return the number of objects added, or a negative value
  * on error.
@@ -336,13 +351,9 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	struct rte_mempool_memhdr *memhdr;
 	int ret;
 
-	/* create the internal ring if not already done */
-	if ((mp->flags & MEMPOOL_F_POOL_CREATED) == 0) {
-		ret = rte_mempool_ops_alloc(mp);
-		if (ret != 0)
-			return ret;
-		mp->flags |= MEMPOOL_F_POOL_CREATED;
-	}
+	ret = mempool_ops_alloc_once(mp);
+	if (ret != 0)
+		return ret;
 
 	/* mempool is already populated */
 	if (mp->populated_size >= mp->size)
@@ -515,6 +526,10 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 	unsigned mz_id, n;
 	int ret;
 
+	ret = mempool_ops_alloc_once(mp);
+	if (ret != 0)
+		return ret;
+
 	/* mempool must not be populated */
 	if (mp->nb_mem_chunks != 0)
 		return -EEXIST;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [PATCH v1 9/9] mempool: support flushing the default cache of the mempool
  2018-03-10 15:39   ` [PATCH v1 0/9] mempool: prepare to add bucket driver Andrew Rybchenko
                       ` (7 preceding siblings ...)
  2018-03-10 15:39     ` [PATCH v1 8/9] mempool: ensure the mempool is initialized before populating Andrew Rybchenko
@ 2018-03-10 15:39     ` Andrew Rybchenko
  2018-03-14 15:49     ` [PATCH v1 0/9] mempool: prepare to add bucket driver santosh
  2018-03-19 17:03     ` Olivier Matz
  10 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-10 15:39 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ, Artem V. Andreev

From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>

The mempool get/put API takes care of the cache itself, but sometimes
it is necessary to flush the cache explicitly.

The function is moved within the file since it now requires
rte_mempool_default_cache().
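
For example, with this change an application can flush the objects
cached on the current lcore back to the pool roughly as follows (an
illustrative sketch, not part of the change itself; passing NULL picks
the calling lcore's default cache):

	/* Return the contents of this lcore's default cache to the pool,
	 * e.g. before the lcore stops doing I/O for a long time. */
	rte_mempool_cache_flush(NULL, mp);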

Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 lib/librte_mempool/rte_mempool.h | 36 ++++++++++++++++++++----------------
 1 file changed, 20 insertions(+), 16 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 5f63f86..4ecb2f6 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -1159,22 +1159,6 @@ void
 rte_mempool_cache_free(struct rte_mempool_cache *cache);
 
 /**
- * Flush a user-owned mempool cache to the specified mempool.
- *
- * @param cache
- *   A pointer to the mempool cache.
- * @param mp
- *   A pointer to the mempool.
- */
-static __rte_always_inline void
-rte_mempool_cache_flush(struct rte_mempool_cache *cache,
-			struct rte_mempool *mp)
-{
-	rte_mempool_ops_enqueue_bulk(mp, cache->objs, cache->len);
-	cache->len = 0;
-}
-
-/**
  * Get a pointer to the per-lcore default mempool cache.
  *
  * @param mp
@@ -1197,6 +1181,26 @@ rte_mempool_default_cache(struct rte_mempool *mp, unsigned lcore_id)
 }
 
 /**
+ * Flush a user-owned mempool cache to the specified mempool.
+ *
+ * @param cache
+ *   A pointer to the mempool cache.
+ * @param mp
+ *   A pointer to the mempool.
+ */
+static __rte_always_inline void
+rte_mempool_cache_flush(struct rte_mempool_cache *cache,
+			struct rte_mempool *mp)
+{
+	if (cache == NULL)
+		cache = rte_mempool_default_cache(mp, rte_lcore_id());
+	if (cache == NULL || cache->len == 0)
+		return;
+	rte_mempool_ops_enqueue_bulk(mp, cache->objs, cache->len);
+	cache->len = 0;
+}
+
+/**
  * @internal Put several objects back in the mempool; used internally.
  * @param mp
  *   A pointer to the mempool structure.
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* Re: [RFC v2 03/17] mempool/octeontx: add callback to calculate memory size
  2018-02-01 13:40         ` santosh
@ 2018-03-10 15:49           ` Andrew Rybchenko
  2018-03-11  6:31             ` santosh
  0 siblings, 1 reply; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-10 15:49 UTC (permalink / raw)
  To: santosh, dev; +Cc: Olivier MATZ, Jerin Jacob

Hi Santosh,

On 02/01/2018 04:40 PM, santosh wrote:
> On Thursday 01 February 2018 03:31 PM, santosh wrote:
>> Hi Andrew,
>>
>>
>> On Thursday 01 February 2018 11:48 AM, Jacob, Jerin wrote:
>>> The driver requires one and only one physically contiguous
>>> memory chunk for all objects.
>>>
>>> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
>>> ---
>>>    drivers/mempool/octeontx/rte_mempool_octeontx.c | 25 +++++++++++++++++++++++++
>>>    1 file changed, 25 insertions(+)
>>>
>>> diff --git a/drivers/mempool/octeontx/rte_mempool_octeontx.c
>>> b/drivers/mempool/octeontx/rte_mempool_octeontx.c
>>> index d143d05..4ec5efe 100644
>>> --- a/drivers/mempool/octeontx/rte_mempool_octeontx.c
>>> +++ b/drivers/mempool/octeontx/rte_mempool_octeontx.c
>>> @@ -136,6 +136,30 @@ octeontx_fpavf_get_capabilities(const struct rte_mempool *mp,
>>>            return 0;
>>>    }
>>>
>>> +static ssize_t
>>> +octeontx_fpavf_calc_mem_size(const struct rte_mempool *mp,
>>> +                            uint32_t obj_num, uint32_t pg_shift,
>>> +                            size_t *min_chunk_size, size_t *align)
>>> +{
>>> +       ssize_t mem_size;
>>> +
>>> +       /*
>>> +        * Simply need space for one more object to be able to
>>> +        * fullfil alignment requirements.
>>> +        */
>>> +       mem_size = rte_mempool_calc_mem_size_def(mp, obj_num + 1, pg_shift,
>>> +
>> I think, you don't need that (obj_num + 1) as because
>> rte_xmem_calc_int() will be checking flags for
>> _ALIGNED + _CAPA_PHYS_CONFIG i.e..
>>
>> 	mask = MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS | MEMPOOL_F_CAPA_PHYS_CONTIG;
>> 	if ((flags & mask) == mask)
>> 		/* alignment need one additional object */
>> 		elt_num += 1;
> ok, You are removing above check in v2- 06/17, so ignore above comment.
> I suggest to move this patch and keep it after 06/17. Or perhaps keep
> common mempool changes first then followed by driver specifics changes in your
> v3 series.

Finally I've decided to include these changes in the patch which
removes get_capabilities [1]. Please take a look at the suggested version.
I think it is the most transparent solution; otherwise it is hard
to avoid the issue you found above.

I'm sorry, I forgot to include you in CC.

[1] https://dpdk.org/dev/patchwork/patch/35934/

Thanks,
Andrew.

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [RFC v2 03/17] mempool/octeontx: add callback to calculate memory size
  2018-03-10 15:49           ` Andrew Rybchenko
@ 2018-03-11  6:31             ` santosh
  0 siblings, 0 replies; 197+ messages in thread
From: santosh @ 2018-03-11  6:31 UTC (permalink / raw)
  To: Andrew Rybchenko, dev; +Cc: Olivier MATZ, Jerin Jacob

Hi Andrew,


On Saturday 10 March 2018 09:19 PM, Andrew Rybchenko wrote:
> Hi Santosh,
>
> On 02/01/2018 04:40 PM, santosh wrote:
>> On Thursday 01 February 2018 03:31 PM, santosh wrote:
>>> Hi Andrew,
>>>
>>>
>>> On Thursday 01 February 2018 11:48 AM, Jacob, Jerin wrote:
>>>> The driver requires one and only one physically contiguous
>>>> memory chunk for all objects.
>>>>
>>>> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
>>>> ---
>>>>    drivers/mempool/octeontx/rte_mempool_octeontx.c | 25 +++++++++++++++++++++++++
>>>>    1 file changed, 25 insertions(+)
>>>>
>>>> diff --git a/drivers/mempool/octeontx/rte_mempool_octeontx.c
>>>> b/drivers/mempool/octeontx/rte_mempool_octeontx.c
>>>> index d143d05..4ec5efe 100644
>>>> --- a/drivers/mempool/octeontx/rte_mempool_octeontx.c
>>>> +++ b/drivers/mempool/octeontx/rte_mempool_octeontx.c
>>>> @@ -136,6 +136,30 @@ octeontx_fpavf_get_capabilities(const struct rte_mempool *mp,
>>>>            return 0;
>>>>    }
>>>>
>>>> +static ssize_t
>>>> +octeontx_fpavf_calc_mem_size(const struct rte_mempool *mp,
>>>> +                            uint32_t obj_num, uint32_t pg_shift,
>>>> +                            size_t *min_chunk_size, size_t *align)
>>>> +{
>>>> +       ssize_t mem_size;
>>>> +
>>>> +       /*
>>>> +        * Simply need space for one more object to be able to
>>>> +        * fullfil alignment requirements.
>>>> +        */
>>>> +       mem_size = rte_mempool_calc_mem_size_def(mp, obj_num + 1, pg_shift,
>>>> +
>>> I think, you don't need that (obj_num + 1) as because
>>> rte_xmem_calc_int() will be checking flags for
>>> _ALIGNED + _CAPA_PHYS_CONFIG i.e..
>>>
>>>     mask = MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS | MEMPOOL_F_CAPA_PHYS_CONTIG;
>>>     if ((flags & mask) == mask)
>>>         /* alignment need one additional object */
>>>         elt_num += 1;
>> ok, You are removing above check in v2- 06/17, so ignore above comment.
>> I suggest to move this patch and keep it after 06/17. Or perhaps keep
>> common mempool changes first then followed by driver specifics changes in your
>> v3 series.
>
> Finally I've decided to include these changes into the patch which
> removes get_capabilities [1]. Please, take a look at suggested version.
> I think it is the most transparent solution. Otherwise it is hard
> to avoid the issue found by you above.
>
Sure. I'll review.

> I'm sorry, I've forgot to include you in CC.
>
NP,

Thanks.

> [1] https://dpdk.org/dev/patchwork/patch/35934/
>
> Thanks,
> Andrew.

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v1 1/9] mempool: add op to calculate memory size to be allocated
  2018-03-10 15:39     ` [PATCH v1 1/9] mempool: add op to calculate memory size to be allocated Andrew Rybchenko
@ 2018-03-11 12:51       ` santosh
  2018-03-12  6:53         ` Andrew Rybchenko
  2018-03-19 17:03       ` Olivier Matz
  1 sibling, 1 reply; 197+ messages in thread
From: santosh @ 2018-03-11 12:51 UTC (permalink / raw)
  To: Andrew Rybchenko, dev; +Cc: Olivier MATZ

Hi Andrew,


On Saturday 10 March 2018 09:09 PM, Andrew Rybchenko wrote:
> Size of memory chunk required to populate mempool objects depends
> on how objects are stored in the memory. Different mempool drivers
> may have different requirements and a new operation allows to
> calculate memory size in accordance with driver requirements and
> advertise requirements on minimum memory chunk size and alignment
> in a generic way.
>
> Bump ABI version since the patch breaks it.
>
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
> ---
> RFCv2 -> v1:
>  - move default calc_mem_size callback to rte_mempool_ops_default.c
>  - add ABI changes to release notes
>  - name default callback consistently: rte_mempool_op_<callback>_default()
>  - bump ABI version since it is the first patch which breaks ABI
>  - describe default callback behaviour in details
>  - avoid introduction of internal function to cope with depration
>    (keep it to deprecation patch)
>  - move cache-line or page boundary chunk alignment to default callback
>  - highlight that min_chunk_size and align parameters are output only
>
[...]

> diff --git a/lib/librte_mempool/rte_mempool_ops_default.c b/lib/librte_mempool/rte_mempool_ops_default.c
> new file mode 100644
> index 0000000..57fe79b
> --- /dev/null
> +++ b/lib/librte_mempool/rte_mempool_ops_default.c
> @@ -0,0 +1,38 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2016 Intel Corporation.
> + * Copyright(c) 2016 6WIND S.A.
> + * Copyright(c) 2018 Solarflare Communications Inc.
> + */
> +
> +#include <rte_mempool.h>
> +
> +ssize_t
> +rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
> +				     uint32_t obj_num, uint32_t pg_shift,
> +				     size_t *min_chunk_size, size_t *align)
> +{
> +	unsigned int mp_flags;
> +	int ret;
> +	size_t total_elt_sz;
> +	size_t mem_size;
> +
> +	/* Get mempool capabilities */
> +	mp_flags = 0;
> +	ret = rte_mempool_ops_get_capabilities(mp, &mp_flags);
> +	if ((ret < 0) && (ret != -ENOTSUP))
> +		return ret;
> +
> +	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
> +
> +	mem_size = rte_mempool_xmem_size(obj_num, total_elt_sz, pg_shift,
> +					 mp->flags | mp_flags);
> +

Looks ok to me except a nit:
is the (mp->flags | mp_flags) style expression meant to differentiate
that mp_flags holds driver-specific flags like BLK_ALIGN while mp->flags
holds application-specific flags? Is it so? If not, then why not simply
do:
mp->flags |= mp_flags.

Thanks.

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v1 1/9] mempool: add op to calculate memory size to be allocated
  2018-03-11 12:51       ` santosh
@ 2018-03-12  6:53         ` Andrew Rybchenko
  0 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-12  6:53 UTC (permalink / raw)
  To: santosh, dev; +Cc: Olivier MATZ

On 03/11/2018 03:51 PM, santosh wrote:
> Hi Andrew,
>
>
> On Saturday 10 March 2018 09:09 PM, Andrew Rybchenko wrote:
>> Size of memory chunk required to populate mempool objects depends
>> on how objects are stored in the memory. Different mempool drivers
>> may have different requirements and a new operation allows to
>> calculate memory size in accordance with driver requirements and
>> advertise requirements on minimum memory chunk size and alignment
>> in a generic way.
>>
>> Bump ABI version since the patch breaks it.
>>
>> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
>> ---
>> RFCv2 -> v1:
>>   - move default calc_mem_size callback to rte_mempool_ops_default.c
>>   - add ABI changes to release notes
>>   - name default callback consistently: rte_mempool_op_<callback>_default()
>>   - bump ABI version since it is the first patch which breaks ABI
>>   - describe default callback behaviour in details
>>   - avoid introduction of internal function to cope with depration
>>     (keep it to deprecation patch)
>>   - move cache-line or page boundary chunk alignment to default callback
>>   - highlight that min_chunk_size and align parameters are output only
>>
> [...]
>
>> diff --git a/lib/librte_mempool/rte_mempool_ops_default.c b/lib/librte_mempool/rte_mempool_ops_default.c
>> new file mode 100644
>> index 0000000..57fe79b
>> --- /dev/null
>> +++ b/lib/librte_mempool/rte_mempool_ops_default.c
>> @@ -0,0 +1,38 @@
>> +/* SPDX-License-Identifier: BSD-3-Clause
>> + * Copyright(c) 2016 Intel Corporation.
>> + * Copyright(c) 2016 6WIND S.A.
>> + * Copyright(c) 2018 Solarflare Communications Inc.
>> + */
>> +
>> +#include <rte_mempool.h>
>> +
>> +ssize_t
>> +rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
>> +				     uint32_t obj_num, uint32_t pg_shift,
>> +				     size_t *min_chunk_size, size_t *align)
>> +{
>> +	unsigned int mp_flags;
>> +	int ret;
>> +	size_t total_elt_sz;
>> +	size_t mem_size;
>> +
>> +	/* Get mempool capabilities */
>> +	mp_flags = 0;
>> +	ret = rte_mempool_ops_get_capabilities(mp, &mp_flags);
>> +	if ((ret < 0) && (ret != -ENOTSUP))
>> +		return ret;
>> +
>> +	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
>> +
>> +	mem_size = rte_mempool_xmem_size(obj_num, total_elt_sz, pg_shift,
>> +					 mp->flags | mp_flags);
>> +
> Looks ok to me except a nit:
> (mp->flags | mp_flags) style expression is to differentiate that
> mp_flags holds driver specific flag like BLK_ALIGN and mp->flags
> has appl specific flags.. is it so? If not then why not simply
> do like:
> mp->flags |= mp_flags.

In fact it does not matter a lot since the code is removed in patch 3.
Here it is required just for consistency. Also, the mp argument is
const, which does not allow changing its members, so the flags have to
be combined in an expression (see the annotated lines below).
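
	/* The relevant lines from the patch above, annotated: 'mp' has
	 * type 'const struct rte_mempool *' in this callback, so
	 * 'mp->flags |= mp_flags;' would not compile. The application
	 * flags and the driver flags are therefore combined in the call
	 * expression instead:
	 */
	mem_size = rte_mempool_xmem_size(obj_num, total_elt_sz, pg_shift,
					 mp->flags | mp_flags);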

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v1 3/9] mempool: remove callback to get capabilities
  2018-03-10 15:39     ` [PATCH v1 3/9] mempool: remove callback to get capabilities Andrew Rybchenko
@ 2018-03-14 14:40       ` Burakov, Anatoly
  2018-03-14 16:12         ` Andrew Rybchenko
  2018-03-19 17:06       ` Olivier Matz
  1 sibling, 1 reply; 197+ messages in thread
From: Burakov, Anatoly @ 2018-03-14 14:40 UTC (permalink / raw)
  To: Andrew Rybchenko, dev; +Cc: Olivier MATZ

On 10-Mar-18 3:39 PM, Andrew Rybchenko wrote:
> The callback was introduced to let generic code to know octeontx
> mempool driver requirements to use single physically contiguous
> memory chunk to store all objects and align object address to
> total object size. Now these requirements are met using a new
> callbacks to calculate required memory chunk size and to populate
> objects using provided memory chunk.
> 
> These capability flags are not used anywhere else.
> 
> Restricting capabilities to flags is not generic and likely to
> be insufficient to describe mempool driver features. If required
> in the future, API which returns structured information may be
> added.
> 
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
> ---

Just a general comment - it is not enough to describe minimum memchunk 
requirements. With the memory hotplug patchset that's hopefully getting 
merged in 18.05, memzones will no longer be guaranteed to be 
IOVA-contiguous. So, if a driver requires its mempool to not only be 
populated from a single memzone, but from a single *physically contiguous* 
memzone, going by callbacks alone will not do, because whether or not 
something should be a single memzone says nothing about whether this 
memzone also has to be IOVA-contiguous.

So I believe this needs to stay in one form or another.

(Also, it would be nice to have a flag that a user could pass to 
mempool_create that would force memzone reservation to be IOVA-contiguous, 
but that's a topic for another conversation. The prime user for this would 
be KNI.)

-- 
Thanks,
Anatoly

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v1 0/9] mempool: prepare to add bucket driver
  2018-03-10 15:39   ` [PATCH v1 0/9] mempool: prepare to add bucket driver Andrew Rybchenko
                       ` (8 preceding siblings ...)
  2018-03-10 15:39     ` [PATCH v1 9/9] mempool: support flushing the default cache of the mempool Andrew Rybchenko
@ 2018-03-14 15:49     ` santosh
  2018-03-14 15:57       ` Andrew Rybchenko
  2018-03-19 17:03     ` Olivier Matz
  10 siblings, 1 reply; 197+ messages in thread
From: santosh @ 2018-03-14 15:49 UTC (permalink / raw)
  To: Andrew Rybchenko, dev
  Cc: Olivier MATZ, Jerin Jacob, Hemant Agrawal, Shreyansh Jain

Hi Andrew,


On Saturday 10 March 2018 09:09 PM, Andrew Rybchenko wrote:

[...]

> RFCv1 -> RFCv2:
>   - add driver ops to calculate required memory size and populate
>     mempool objects, remove extra flags which were required before
>     to control it
>   - transition of octeontx and dpaa drivers to the new callbacks
>   - change info API to get information from driver required to
>     API user to know contiguous block size
>   - remove get_capabilities (not required any more and may be
>     substituted with more in info get API)
>   - remove register_memory_area since it is substituted with
>     populate callback which can do more
>   - use SPDX tags
>   - avoid all objects affinity to single lcore
>   - fix bucket get_count
>   - deprecate XMEM API
>   - avoid introduction of a new function to flush cache
>   - fix NO_CACHE_ALIGN case in bucket mempool

I'm evaluating your series on the octeontx platform.
I noticed a build break for the dpaa platform:
  CC dpaa_mempool.o
/home/ubuntu/83xx/dpdk/drivers/mempool/dpaa/dpaa_mempool.c: In function ‘dpaa_populate’:
/home/ubuntu/83xx/dpdk/drivers/mempool/dpaa/dpaa_mempool.c:291:41: error: passing argument 1 of ‘rte_mempool_op_populate_default’ discards ‘const’ qualifier from pointer target type [-Werror=discarded-qualifiers]
  return rte_mempool_op_populate_default(mp, max_objs, vaddr, paddr, len,
                                         ^
In file included from /home/ubuntu/83xx/dpdk/drivers/mempool/dpaa/dpaa_mempool.h:15:0,
                 from /home/ubuntu/83xx/dpdk/drivers/mempool/dpaa/dpaa_mempool.c:28:
/home/ubuntu/83xx/dpdk/build/include/rte_mempool.h:490:5: note: expected ‘struct rte_mempool *’ but argument is of type ‘const struct rte_mempool *’
 int rte_mempool_op_populate_default(struct rte_mempool *mp,
     ^
/home/ubuntu/83xx/dpdk/drivers/mempool/dpaa/dpaa_mempool.c: At top level:
/home/ubuntu/83xx/dpdk/drivers/mempool/dpaa/dpaa_mempool.c:303:14: error: initialization from incompatible pointer type [-Werror=incompatible-pointer-types]
  .populate = dpaa_populate,
              ^
/home/ubuntu/83xx/dpdk/drivers/mempool/dpaa/dpaa_mempool.c:303:14: note: (near initialization for ‘dpaa_mpool_ops.populate’)
cc1: all warnings being treated as errors

Maybe consider adding the following for the dpaa platform:
diff --git a/drivers/mempool/dpaa/dpaa_mempool.c b/drivers/mempool/dpaa/dpaa_mempool.c
index a2bbb392a..ce5050627 100644
--- a/drivers/mempool/dpaa/dpaa_mempool.c
+++ b/drivers/mempool/dpaa/dpaa_mempool.c
@@ -263,8 +263,8 @@ dpaa_mbuf_get_count(const struct rte_mempool *mp)
        return bman_query_free_buffers(bp_info->bp);
 }
 
-static int
-dpaa_populate(const struct rte_mempool *mp, unsigned int max_objs,
+static int __rte_unused
+dpaa_populate(struct rte_mempool *mp, unsigned int max_objs,
              char *vaddr, rte_iova_t paddr, size_t len,
              rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
 {

I will share test and review feedback for the octeontx platform soon.

[...]

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* Re: [PATCH v1 0/9] mempool: prepare to add bucket driver
  2018-03-14 15:49     ` [PATCH v1 0/9] mempool: prepare to add bucket driver santosh
@ 2018-03-14 15:57       ` Andrew Rybchenko
  0 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-14 15:57 UTC (permalink / raw)
  To: santosh, dev; +Cc: Olivier MATZ, Jerin Jacob, Hemant Agrawal, Shreyansh Jain

Hi Santosh,

On 03/14/2018 06:49 PM, santosh wrote:
> Hi Andrew,
>
>
> On Saturday 10 March 2018 09:09 PM, Andrew Rybchenko wrote:
>
> [...]
>
>> RFCv1 -> RFCv2:
>>    - add driver ops to calculate required memory size and populate
>>      mempool objects, remove extra flags which were required before
>>      to control it
>>    - transition of octeontx and dpaa drivers to the new callbacks
>>    - change info API to get information from driver required to
>>      API user to know contiguous block size
>>    - remove get_capabilities (not required any more and may be
>>      substituted with more in info get API)
>>    - remove register_memory_area since it is substituted with
>>      populate callback which can do more
>>    - use SPDX tags
>>    - avoid all objects affinity to single lcore
>>    - fix bucket get_count
>>    - deprecate XMEM API
>>    - avoid introduction of a new function to flush cache
>>    - fix NO_CACHE_ALIGN case in bucket mempool
> I'm evaluating your series in octeontx platform.
> Noticed a build break for dpaa platform:
>    CC dpaa_mempool.o
> /home/ubuntu/83xx/dpdk/drivers/mempool/dpaa/dpaa_mempool.c: In function ‘dpaa_populate’:
> /home/ubuntu/83xx/dpdk/drivers/mempool/dpaa/dpaa_mempool.c:291:41: error: passing argument 1 of ‘rte_mempool_op_populate_default’ discards ‘const’ qualifier from pointer target type [-Werror=discarded-qualifiers]
>    return rte_mempool_op_populate_default(mp, max_objs, vaddr, paddr, len,
>                                           ^
> In file included from /home/ubuntu/83xx/dpdk/drivers/mempool/dpaa/dpaa_mempool.h:15:0,
>                   from /home/ubuntu/83xx/dpdk/drivers/mempool/dpaa/dpaa_mempool.c:28:
> /home/ubuntu/83xx/dpdk/build/include/rte_mempool.h:490:5: note: expected ‘struct rte_mempool *’ but argument is of type ‘const struct rte_mempool *’
>   int rte_mempool_op_populate_default(struct rte_mempool *mp,
>       ^
> /home/ubuntu/83xx/dpdk/drivers/mempool/dpaa/dpaa_mempool.c: At top level:
> /home/ubuntu/83xx/dpdk/drivers/mempool/dpaa/dpaa_mempool.c:303:14: error: initialization from incompatible pointer type [-Werror=incompatible-pointer-types]
>    .populate = dpaa_populate,
>                ^
> /home/ubuntu/83xx/dpdk/drivers/mempool/dpaa/dpaa_mempool.c:303:14: note: (near initialization for ‘dpaa_mpool_ops.populate’)
> cc1: all warnings being treated as errors

Yes, my bad, const should simply be removed to match the prototype (and 
the mempool is actually modified since it is being populated). Will fix.
The prototype would become roughly as shown below.
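
	/* Sketch of the intended change only (the dpaa-specific body is
	 * unchanged): drop the const qualifier so the prototype matches
	 * the populate callback and the default populate helper can be
	 * called without a warning. */
	static int
	dpaa_populate(struct rte_mempool *mp, unsigned int max_objs,
		      char *vaddr, rte_iova_t paddr, size_t len,
		      rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg);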

Many thanks,
Andrew.

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v1 3/9] mempool: remove callback to get capabilities
  2018-03-14 14:40       ` Burakov, Anatoly
@ 2018-03-14 16:12         ` Andrew Rybchenko
  2018-03-14 16:53           ` Burakov, Anatoly
  0 siblings, 1 reply; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-14 16:12 UTC (permalink / raw)
  To: Burakov, Anatoly, dev; +Cc: Olivier MATZ

On 03/14/2018 05:40 PM, Burakov, Anatoly wrote:
> On 10-Mar-18 3:39 PM, Andrew Rybchenko wrote:
>> The callback was introduced to let generic code to know octeontx
>> mempool driver requirements to use single physically contiguous
>> memory chunk to store all objects and align object address to
>> total object size. Now these requirements are met using a new
>> callbacks to calculate required memory chunk size and to populate
>> objects using provided memory chunk.
>>
>> These capability flags are not used anywhere else.
>>
>> Restricting capabilities to flags is not generic and likely to
>> be insufficient to describe mempool driver features. If required
>> in the future, API which returns structured information may be
>> added.
>>
>> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
>> ---
>
> Just a general comment - it is not enough to describe minimum memchunk 
> requirements. With memory hotplug patchset that's hopefully getting 
> merged in 18.05, memzones will no longer be guaranteed to be 
> IOVA-contiguous. So, if a driver requires its mempool to not only be 
> populated from a single memzone, but a single *physically contiguous* 
> memzone, going by only callbacks will not do, because whether or not 
> something should be a single memzone says nothing about whether this 
> memzone has to also be IOVA-contiguous.
>
> So i believe this needs to stay in one form or another.
>
> (also it would be nice to have a flag that a user could pass to 
> mempool_create that would force memzone reservation be 
> IOVA-contiguous, but that's a topic for another conversation. prime 
> user for this would be KNI.)

I think that min_chunk_size should be treated as IOVA-contiguous. So, we 
have 4 levels:
  - MEMPOOL_F_NO_PHYS_CONTIG (min_chunk_size == 0) -- IOVA-contiguous 
memory is not required at all
  - no MEMPOOL_F_NO_PHYS_CONTIG (min_chunk_size == total_obj_size) -- 
each object should be IOVA-contiguous
  - min_chunk_size > total_obj_size -- a group of objects should be 
IOVA-contiguous
  - min_chunk_size == <all-objects-size> -- all objects should be 
IOVA-contiguous

If so, how should allocation be implemented?
  1. if (min_chunk_size > min_page_size)
     a. try to allocate everything IOVA-contiguous
     b. if that cannot be done, allocate min_chunk_size-contiguous chunks
  2. else allocate non-contiguous
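
As an illustration of the third level above, a driver could advertise a 
per-group requirement from its calc_mem_size callback roughly like this 
(a hypothetical sketch only; the group size constant and the assumption 
that at least one object fits into a group are made up for the example):

	static ssize_t
	example_calc_mem_size(const struct rte_mempool *mp, uint32_t obj_num,
			      uint32_t pg_shift, size_t *min_chunk_size,
			      size_t *align)
	{
		const size_t group_size = 32 * 1024; /* hypothetical */
		size_t total_elt_sz =
			mp->header_size + mp->elt_size + mp->trailer_size;
		uint32_t objs_in_group = group_size / total_elt_sz;
		uint32_t group_num =
			(obj_num + objs_in_group - 1) / objs_in_group;

		RTE_SET_USED(pg_shift);
		/* Each group of objects has to be IOVA-contiguous, so the
		 * minimum chunk size (and alignment) is the group size. */
		*min_chunk_size = group_size;
		*align = group_size;
		return group_num * group_size;
	}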

--
Andrew.

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v1 3/9] mempool: remove callback to get capabilities
  2018-03-14 16:12         ` Andrew Rybchenko
@ 2018-03-14 16:53           ` Burakov, Anatoly
  2018-03-14 17:24             ` Andrew Rybchenko
  0 siblings, 1 reply; 197+ messages in thread
From: Burakov, Anatoly @ 2018-03-14 16:53 UTC (permalink / raw)
  To: Andrew Rybchenko, dev; +Cc: Olivier MATZ

On 14-Mar-18 4:12 PM, Andrew Rybchenko wrote:
> On 03/14/2018 05:40 PM, Burakov, Anatoly wrote:
>> On 10-Mar-18 3:39 PM, Andrew Rybchenko wrote:
>>> The callback was introduced to let generic code to know octeontx
>>> mempool driver requirements to use single physically contiguous
>>> memory chunk to store all objects and align object address to
>>> total object size. Now these requirements are met using a new
>>> callbacks to calculate required memory chunk size and to populate
>>> objects using provided memory chunk.
>>>
>>> These capability flags are not used anywhere else.
>>>
>>> Restricting capabilities to flags is not generic and likely to
>>> be insufficient to describe mempool driver features. If required
>>> in the future, API which returns structured information may be
>>> added.
>>>
>>> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
>>> ---
>>
>> Just a general comment - it is not enough to describe minimum memchunk 
>> requirements. With memory hotplug patchset that's hopefully getting 
>> merged in 18.05, memzones will no longer be guaranteed to be 
>> IOVA-contiguous. So, if a driver requires its mempool to not only be 
>> populated from a single memzone, but a single *physically contiguous* 
>> memzone, going by only callbacks will not do, because whether or not 
>> something should be a single memzone says nothing about whether this 
>> memzone has to also be IOVA-contiguous.
>>
>> So i believe this needs to stay in one form or another.
>>
>> (also it would be nice to have a flag that a user could pass to 
>> mempool_create that would force memzone reservation be 
>> IOVA-contiguous, but that's a topic for another conversation. prime 
>> user for this would be KNI.)
> 
> I think that min_chunk_size should be treated as IOVA-contiguous.

Why? It's perfectly reasonable to e.g. implement a software mempool 
driver that would perform some optimizations due to all objects being in 
the same VA-contiguous memzone, yet not be dependent on underlying 
physical memory layout. These are two separate concerns IMO.

 > So, we
> have 4 levels:
>   - MEMPOOL_F_NO_PHYS_CONTIG (min_chunk_size == 0) -- IOVA-congtiguous 
> is not required at all
>   - no MEMPOOL_F_NO_PHYS_CONTIG (min_chunk_size == total_obj_size) -- 
> object should be IOVA-contiguous
>   - min_chunk_size > total_obj_size  -- group of objects should be 
> IOVA-contiguous
>   - min_chunk_size == <all-objects-size> -- all objects should be 
> IOVA-contiguous

I don't think this "automagic" decision on what should be 
IOVA-contiguous or not is the way to go. It needlessly complicates 
things, when all it takes is another flag passed to mempool allocator 
somewhere.

I'm not sure what is the best solution here. Perhaps another option 
would be to let mempool drivers allocate their memory as well? I.e. 
leave current behavior as default, as it's likely that it would be 
suitable for nearly all use cases, but provide another option to 
override memory allocation completely, so that e.g. octeontx could just 
do a memzone_reserve_contig() without regard for default allocation 
settings. I think this could be the cleanest solution.

> 
> If so, how allocation should be implemented?
>   1. if (min_chunk_size > min_page_size)
>      a. try all contiguous
>      b. if cannot, do by mem_chunk_size contiguous
>   2. else allocate non-contiguous
> 
> --
> Andrew.


-- 
Thanks,
Anatoly

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v1 3/9] mempool: remove callback to get capabilities
  2018-03-14 16:53           ` Burakov, Anatoly
@ 2018-03-14 17:24             ` Andrew Rybchenko
  2018-03-15  9:48               ` Burakov, Anatoly
  0 siblings, 1 reply; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-14 17:24 UTC (permalink / raw)
  To: Burakov, Anatoly, dev; +Cc: Olivier MATZ

On 03/14/2018 07:53 PM, Burakov, Anatoly wrote:
> On 14-Mar-18 4:12 PM, Andrew Rybchenko wrote:
>> On 03/14/2018 05:40 PM, Burakov, Anatoly wrote:
>>> On 10-Mar-18 3:39 PM, Andrew Rybchenko wrote:
>>>> The callback was introduced to let generic code to know octeontx
>>>> mempool driver requirements to use single physically contiguous
>>>> memory chunk to store all objects and align object address to
>>>> total object size. Now these requirements are met using a new
>>>> callbacks to calculate required memory chunk size and to populate
>>>> objects using provided memory chunk.
>>>>
>>>> These capability flags are not used anywhere else.
>>>>
>>>> Restricting capabilities to flags is not generic and likely to
>>>> be insufficient to describe mempool driver features. If required
>>>> in the future, API which returns structured information may be
>>>> added.
>>>>
>>>> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
>>>> ---
>>>
>>> Just a general comment - it is not enough to describe minimum 
>>> memchunk requirements. With memory hotplug patchset that's hopefully 
>>> getting merged in 18.05, memzones will no longer be guaranteed to be 
>>> IOVA-contiguous. So, if a driver requires its mempool to not only be 
>>> populated from a single memzone, but a single *physically 
>>> contiguous* memzone, going by only callbacks will not do, because 
>>> whether or not something should be a single memzone says nothing 
>>> about whether this memzone has to also be IOVA-contiguous.
>>>
>>> So i believe this needs to stay in one form or another.
>>>
>>> (also it would be nice to have a flag that a user could pass to 
>>> mempool_create that would force memzone reservation be 
>>> IOVA-contiguous, but that's a topic for another conversation. prime 
>>> user for this would be KNI.)
>>
>> I think that min_chunk_size should be treated as IOVA-contiguous.
>
> Why? It's perfectly reasonable to e.g. implement a software mempool 
> driver that would perform some optimizations due to all objects being 
> in the same VA-contiguous memzone, yet not be dependent on underlying 
> physical memory layout. These are two separate concerns IMO.

It looks like there is some misunderstanding here, or I simply don't 
understand your point.
Above I mean that a driver should be able to advertise its requirements 
on IOVA-contiguous regions.
If a driver does not care about physical memory layout, no problem.

> > So, we
>> have 4 levels:
>>   - MEMPOOL_F_NO_PHYS_CONTIG (min_chunk_size == 0) -- 
>> IOVA-congtiguous is not required at all
>>   - no MEMPOOL_F_NO_PHYS_CONTIG (min_chunk_size == total_obj_size) -- 
>> object should be IOVA-contiguous
>>   - min_chunk_size > total_obj_size  -- group of objects should be 
>> IOVA-contiguous
>>   - min_chunk_size == <all-objects-size> -- all objects should be 
>> IOVA-contiguous
>
> I don't think this "automagic" decision on what should be 
> IOVA-contiguous or not is the way to go. It needlessly complicates 
> things, when all it takes is another flag passed to mempool allocator 
> somewhere.

No, it is not just one flag. We really need option (3) above: a group of 
objects IOVA-contiguous, in [1].
Of course, it is possible to use option (4) instead: everything 
IOVA-contiguous, but I think it is bad - it may be very big and 
hard/impossible to allocate due to fragmentation.

> I'm not sure what is the best solution here. Perhaps another option 
> would be to let mempool drivers allocate their memory as well? I.e. 
> leave current behavior as default, as it's likely that it would be 
> suitable for nearly all use cases, but provide another option to 
> override memory allocation completely, so that e.g. octeontx could 
> just do a memzone_reserve_contig() without regard for default 
> allocation settings. I think this could be the cleanest solution.

For me it is hard to say. I don't know DPDK history well enough to say 
why there is a mempool API to populate objects on externally provided 
memory. If it may be removed, it is OK for me to do memory allocation 
inside rte_mempool or mempool drivers. Otherwise, if it is still allowed 
to allocate memory externally and pass it to the mempool, there must be 
a way to express IOVA-contiguous requirements.

[1] https://dpdk.org/dev/patchwork/patch/34338/

>
>>
>> If so, how allocation should be implemented?
>>   1. if (min_chunk_size > min_page_size)
>>      a. try all contiguous
>>      b. if cannot, do by mem_chunk_size contiguous
>>   2. else allocate non-contiguous
>>
>> -- 
>> Andrew.
>
>

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v1 3/9] mempool: remove callback to get capabilities
  2018-03-14 17:24             ` Andrew Rybchenko
@ 2018-03-15  9:48               ` Burakov, Anatoly
  2018-03-15 11:49                 ` Andrew Rybchenko
  0 siblings, 1 reply; 197+ messages in thread
From: Burakov, Anatoly @ 2018-03-15  9:48 UTC (permalink / raw)
  To: Andrew Rybchenko, dev; +Cc: Olivier MATZ

On 14-Mar-18 5:24 PM, Andrew Rybchenko wrote:
> On 03/14/2018 07:53 PM, Burakov, Anatoly wrote:
>> On 14-Mar-18 4:12 PM, Andrew Rybchenko wrote:
>>> On 03/14/2018 05:40 PM, Burakov, Anatoly wrote:
>>>> On 10-Mar-18 3:39 PM, Andrew Rybchenko wrote:
>>>>> The callback was introduced to let generic code to know octeontx
>>>>> mempool driver requirements to use single physically contiguous
>>>>> memory chunk to store all objects and align object address to
>>>>> total object size. Now these requirements are met using a new
>>>>> callbacks to calculate required memory chunk size and to populate
>>>>> objects using provided memory chunk.
>>>>>
>>>>> These capability flags are not used anywhere else.
>>>>>
>>>>> Restricting capabilities to flags is not generic and likely to
>>>>> be insufficient to describe mempool driver features. If required
>>>>> in the future, API which returns structured information may be
>>>>> added.
>>>>>
>>>>> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
>>>>> ---
>>>>
>>>> Just a general comment - it is not enough to describe minimum 
>>>> memchunk requirements. With memory hotplug patchset that's hopefully 
>>>> getting merged in 18.05, memzones will no longer be guaranteed to be 
>>>> IOVA-contiguous. So, if a driver requires its mempool to not only be 
>>>> populated from a single memzone, but a single *physically 
>>>> contiguous* memzone, going by only callbacks will not do, because 
>>>> whether or not something should be a single memzone says nothing 
>>>> about whether this memzone has to also be IOVA-contiguous.
>>>>
>>>> So i believe this needs to stay in one form or another.
>>>>
>>>> (also it would be nice to have a flag that a user could pass to 
>>>> mempool_create that would force memzone reservation be 
>>>> IOVA-contiguous, but that's a topic for another conversation. prime 
>>>> user for this would be KNI.)
>>>
>>> I think that min_chunk_size should be treated as IOVA-contiguous.
>>
>> Why? It's perfectly reasonable to e.g. implement a software mempool 
>> driver that would perform some optimizations due to all objects being 
>> in the same VA-contiguous memzone, yet not be dependent on underlying 
>> physical memory layout. These are two separate concerns IMO.
> 
> It looks like there is some misunderstanding here or I simply don't 
> understand your point.
> Above I mean that driver should be able to advertise its requirements on 
> IOVA-contiguous regions.
> If driver do not care about physical memory layout, no problem.

Please correct me if I'm wrong, but my understanding was that you wanted 
to use min_chunk as a way to express minimum requirements for 
IOVA-contiguous memory. If I understood you correctly, I don't think 
that's the way to go, because there could be valid use cases where a 
mempool driver would like to advertise min_chunk_size to be equal to its 
total size (i.e. allocate everything in a single memzone), yet not 
require that memzone to be IOVA-contiguous. I think these are two 
different concerns, and one does not, and should not, imply the other.

> 
>> > So, we
>>> have 4 levels:
>>>   - MEMPOOL_F_NO_PHYS_CONTIG (min_chunk_size == 0) -- 
>>> IOVA-congtiguous is not required at all
>>>   - no MEMPOOL_F_NO_PHYS_CONTIG (min_chunk_size == total_obj_size) -- 
>>> object should be IOVA-contiguous
>>>   - min_chunk_size > total_obj_size  -- group of objects should be 
>>> IOVA-contiguous
>>>   - min_chunk_size == <all-objects-size> -- all objects should be 
>>> IOVA-contiguous
>>
>> I don't think this "automagic" decision on what should be 
>> IOVA-contiguous or not is the way to go. It needlessly complicates 
>> things, when all it takes is another flag passed to mempool allocator 
>> somewhere.
> 
> No, it is not just one flag. We really need option (3) above: group of 
> objects IOVA-contiguous in [1].
> Of course, it is possible to use option (4) instead: everything 
> IOVA-contigous, but I think it is bad - it may be very big and 
> hard/impossible to allocate due to fragmentation.
> 

Exactly: we shouldn't be forcing IOVA-contiguous memory just because a 
mempool requested a big min_chunk_size, nor do I think it is wise to 
encode such heuristics (referring to your 4 "levels" quoted above) into 
the mempool allocator.

>> I'm not sure what is the best solution here. Perhaps another option 
>> would be to let mempool drivers allocate their memory as well? I.e. 
>> leave current behavior as default, as it's likely that it would be 
>> suitable for nearly all use cases, but provide another option to 
>> override memory allocation completely, so that e.g. octeontx could 
>> just do a memzone_reserve_contig() without regard for default 
>> allocation settings. I think this could be the cleanest solution.
> 
> For me it is hard to say. I don't know DPDK history good enough to say 
> why there is a mempool API to populate objects on externally provided 
> memory. If it may be removed, it is OK for me to do memory allocation 
> inside rte_mempool or mempool drivers. Otherwise, if it is still allowed 
> to allocate memory externally and pass it to mempool, it must be a way 
> to express IOVA-contiguos requirements.
> 
> [1] https://dpdk.org/dev/patchwork/patch/34338/

Populating mempool objects is not the same as reserving memory where 
those objects would reside. The closest to "allocate memory externally" 
we have is rte_mempool_xmem_create(), which you are removing in this 
patchset.

> 
>>
>>>
>>> If so, how allocation should be implemented?
>>>   1. if (min_chunk_size > min_page_size)
>>>      a. try all contiguous
>>>      b. if cannot, do by mem_chunk_size contiguous
>>>   2. else allocate non-contiguous
>>>
>>> -- 
>>> Andrew.
>>
>>
> 


-- 
Thanks,
Anatoly

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v1 3/9] mempool: remove callback to get capabilities
  2018-03-15  9:48               ` Burakov, Anatoly
@ 2018-03-15 11:49                 ` Andrew Rybchenko
  2018-03-15 12:00                   ` Burakov, Anatoly
  0 siblings, 1 reply; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-15 11:49 UTC (permalink / raw)
  To: Burakov, Anatoly, dev; +Cc: Olivier MATZ

On 03/15/2018 12:48 PM, Burakov, Anatoly wrote:
> On 14-Mar-18 5:24 PM, Andrew Rybchenko wrote:
>> On 03/14/2018 07:53 PM, Burakov, Anatoly wrote:
>>> On 14-Mar-18 4:12 PM, Andrew Rybchenko wrote:
>>>> On 03/14/2018 05:40 PM, Burakov, Anatoly wrote:
>>>>> On 10-Mar-18 3:39 PM, Andrew Rybchenko wrote:
>>>>>> The callback was introduced to let generic code to know octeontx
>>>>>> mempool driver requirements to use single physically contiguous
>>>>>> memory chunk to store all objects and align object address to
>>>>>> total object size. Now these requirements are met using a new
>>>>>> callbacks to calculate required memory chunk size and to populate
>>>>>> objects using provided memory chunk.
>>>>>>
>>>>>> These capability flags are not used anywhere else.
>>>>>>
>>>>>> Restricting capabilities to flags is not generic and likely to
>>>>>> be insufficient to describe mempool driver features. If required
>>>>>> in the future, API which returns structured information may be
>>>>>> added.
>>>>>>
>>>>>> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
>>>>>> ---
>>>>>
>>>>> Just a general comment - it is not enough to describe minimum 
>>>>> memchunk requirements. With memory hotplug patchset that's 
>>>>> hopefully getting merged in 18.05, memzones will no longer be 
>>>>> guaranteed to be IOVA-contiguous. So, if a driver requires its 
>>>>> mempool to not only be populated from a single memzone, but a 
>>>>> single *physically contiguous* memzone, going by only callbacks 
>>>>> will not do, because whether or not something should be a single 
>>>>> memzone says nothing about whether this memzone has to also be 
>>>>> IOVA-contiguous.
>>>>>
>>>>> So i believe this needs to stay in one form or another.
>>>>>
>>>>> (also it would be nice to have a flag that a user could pass to 
>>>>> mempool_create that would force memzone reservation be 
>>>>> IOVA-contiguous, but that's a topic for another conversation. 
>>>>> prime user for this would be KNI.)
>>>>
>>>> I think that min_chunk_size should be treated as IOVA-contiguous.
>>>
>>> Why? It's perfectly reasonable to e.g. implement a software mempool 
>>> driver that would perform some optimizations due to all objects 
>>> being in the same VA-contiguous memzone, yet not be dependent on 
>>> underlying physical memory layout. These are two separate concerns IMO.
>>
>> It looks like there is some misunderstanding here or I simply don't 
>> understand your point.
>> Above I mean that driver should be able to advertise its requirements 
>> on IOVA-contiguous regions.
>> If driver do not care about physical memory layout, no problem.
>
> Please correct me if i'm wrong, but my understanding was that you 
> wanted to use min_chunk as a way to express minimum requirements for 
> IOVA-contiguous memory. If i understood you correctly, i don't think 
> that's the way to go because there could be valid use cases where a 
> mempool driver would like to advertise min_chunk_size to be equal to 
> its total size (i.e. allocate everything in a single memzone), yet not 
> require that memzone to be IOVA-contiguous. I think these are two 
> different concerns, and one does not, and should not imply the other.

Aha, you're saying that virtual-contiguous and IOVA-contiguous 
requirements are different things and that there could be use cases where 
virtual contiguity is important but IOVA contiguity is not required. That 
is perfectly fine.
As I understand it, IOVA-contiguous (physical) typically implies 
virtual-contiguous as well. Requirements to have everything virtually 
contiguous but only some blocks physically contiguous are unlikely. So it 
may be reduced to either virtual or physical contiguity. If a mempool does 
not care about physical contiguity at all, the MEMPOOL_F_NO_PHYS_CONTIG 
flag should be used and min_chunk_size should express virtual-contiguity 
requirements. If a mempool requires physically contiguous objects, the 
MEMPOOL_F_NO_PHYS_CONTIG flag is *not* set and min_chunk_size means 
physical-contiguity requirements.

>>
>>> > So, we
>>>> have 4 levels:
>>>>   - MEMPOOL_F_NO_PHYS_CONTIG (min_chunk_size == 0) -- 
>>>> IOVA-congtiguous is not required at all
>>>>   - no MEMPOOL_F_NO_PHYS_CONTIG (min_chunk_size == total_obj_size) 
>>>> -- object should be IOVA-contiguous
>>>>   - min_chunk_size > total_obj_size  -- group of objects should be 
>>>> IOVA-contiguous
>>>>   - min_chunk_size == <all-objects-size> -- all objects should be 
>>>> IOVA-contiguous
>>>
>>> I don't think this "automagic" decision on what should be 
>>> IOVA-contiguous or not is the way to go. It needlessly complicates 
>>> things, when all it takes is another flag passed to mempool 
>>> allocator somewhere.
>>
>> No, it is not just one flag. We really need option (3) above: group 
>> of objects IOVA-contiguous in [1].
>> Of course, it is possible to use option (4) instead: everything 
>> IOVA-contigous, but I think it is bad - it may be very big and 
>> hard/impossible to allocate due to fragmentation.
>>
>
> Exactly: we shouldn't be forcing IOVA-contiguous memory just because 
> mempool requrested a big min_chunk_size, nor do i think it is wise to 
> encode such heuristics (referring to your 4 "levels" quoted above) 
> into the mempool allocator.
>
>>> I'm not sure what is the best solution here. Perhaps another option 
>>> would be to let mempool drivers allocate their memory as well? I.e. 
>>> leave current behavior as default, as it's likely that it would be 
>>> suitable for nearly all use cases, but provide another option to 
>>> override memory allocation completely, so that e.g. octeontx could 
>>> just do a memzone_reserve_contig() without regard for default 
>>> allocation settings. I think this could be the cleanest solution.
>>
>> For me it is hard to say. I don't know DPDK history good enough to 
>> say why there is a mempool API to populate objects on externally 
>> provided memory. If it may be removed, it is OK for me to do memory 
>> allocation inside rte_mempool or mempool drivers. Otherwise, if it is 
>> still allowed to allocate memory externally and pass it to mempool, 
>> it must be a way to express IOVA-contiguos requirements.
>>
>> [1] https://dpdk.org/dev/patchwork/patch/34338/
>
> Populating mempool objects is not the same as reserving memory where 
> those objects would reside. The closest to "allocate memory 
> externally" we have is rte_mempool_xmem_create(), which you are 
> removing in this patchset.

It is not the only function. Other functions remain: 
rte_mempool_populate_iova, rte_mempool_populate_iova_tab, 
rte_mempool_populate_virt. These functions may be used to add memory 
areas to a mempool to populate objects. So the memory is allocated 
externally, and the external entity needs to know the requirements on 
memory allocation: the size and whether it must be virtually or both 
virtually and physically contiguous.
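
For example, an application providing its own memory could do roughly the 
following (an illustrative sketch only; populate_from_external is a 
hypothetical helper, and buf_va, buf_iova and buf_len stand for an 
externally allocated region that satisfies the size and contiguity 
requirements advertised by the driver):

	static int
	populate_from_external(struct rte_mempool *mp, char *buf_va,
			       rte_iova_t buf_iova, size_t buf_len)
	{
		/* Add an externally allocated, IOVA-contiguous memory area
		 * to the mempool; the driver then populates objects inside
		 * it. NULL free callback: the caller owns the memory. */
		return rte_mempool_populate_iova(mp, buf_va, buf_iova,
						 buf_len, NULL, NULL);
	}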

>>
>>>
>>>>
>>>> If so, how allocation should be implemented?
>>>>   1. if (min_chunk_size > min_page_size)
>>>>      a. try all contiguous
>>>>      b. if cannot, do by mem_chunk_size contiguous
>>>>   2. else allocate non-contiguous
>>>>
>>>> -- 
>>>> Andrew.
>>>
>>>
>>
>
>

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v1 3/9] mempool: remove callback to get capabilities
  2018-03-15 11:49                 ` Andrew Rybchenko
@ 2018-03-15 12:00                   ` Burakov, Anatoly
  2018-03-15 12:44                     ` Andrew Rybchenko
  0 siblings, 1 reply; 197+ messages in thread
From: Burakov, Anatoly @ 2018-03-15 12:00 UTC (permalink / raw)
  To: Andrew Rybchenko, dev; +Cc: Olivier MATZ

On 15-Mar-18 11:49 AM, Andrew Rybchenko wrote:
> On 03/15/2018 12:48 PM, Burakov, Anatoly wrote:
>> On 14-Mar-18 5:24 PM, Andrew Rybchenko wrote:
>>> On 03/14/2018 07:53 PM, Burakov, Anatoly wrote:
>>>> On 14-Mar-18 4:12 PM, Andrew Rybchenko wrote:
>>>>> On 03/14/2018 05:40 PM, Burakov, Anatoly wrote:
>>>>>> On 10-Mar-18 3:39 PM, Andrew Rybchenko wrote:
>>>>>>> The callback was introduced to let generic code to know octeontx
>>>>>>> mempool driver requirements to use single physically contiguous
>>>>>>> memory chunk to store all objects and align object address to
>>>>>>> total object size. Now these requirements are met using a new
>>>>>>> callbacks to calculate required memory chunk size and to populate
>>>>>>> objects using provided memory chunk.
>>>>>>>
>>>>>>> These capability flags are not used anywhere else.
>>>>>>>
>>>>>>> Restricting capabilities to flags is not generic and likely to
>>>>>>> be insufficient to describe mempool driver features. If required
>>>>>>> in the future, API which returns structured information may be
>>>>>>> added.
>>>>>>>
>>>>>>> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
>>>>>>> ---
>>>>>>
>>>>>> Just a general comment - it is not enough to describe minimum 
>>>>>> memchunk requirements. With memory hotplug patchset that's 
>>>>>> hopefully getting merged in 18.05, memzones will no longer be 
>>>>>> guaranteed to be IOVA-contiguous. So, if a driver requires its 
>>>>>> mempool to not only be populated from a single memzone, but a 
>>>>>> single *physically contiguous* memzone, going by only callbacks 
>>>>>> will not do, because whether or not something should be a single 
>>>>>> memzone says nothing about whether this memzone has to also be 
>>>>>> IOVA-contiguous.
>>>>>>
>>>>>> So i believe this needs to stay in one form or another.
>>>>>>
>>>>>> (also it would be nice to have a flag that a user could pass to 
>>>>>> mempool_create that would force memzone reservation be 
>>>>>> IOVA-contiguous, but that's a topic for another conversation. 
>>>>>> prime user for this would be KNI.)
>>>>>
>>>>> I think that min_chunk_size should be treated as IOVA-contiguous.
>>>>
>>>> Why? It's perfectly reasonable to e.g. implement a software mempool 
>>>> driver that would perform some optimizations due to all objects 
>>>> being in the same VA-contiguous memzone, yet not be dependent on 
>>>> underlying physical memory layout. These are two separate concerns IMO.
>>>
>>> It looks like there is some misunderstanding here or I simply don't 
>>> understand your point.
>>> Above I mean that driver should be able to advertise its requirements 
>>> on IOVA-contiguous regions.
>>> If driver do not care about physical memory layout, no problem.
>>
>> Please correct me if i'm wrong, but my understanding was that you 
>> wanted to use min_chunk as a way to express minimum requirements for 
>> IOVA-contiguous memory. If i understood you correctly, i don't think 
>> that's the way to go because there could be valid use cases where a 
>> mempool driver would like to advertise min_chunk_size to be equal to 
>> its total size (i.e. allocate everything in a single memzone), yet not 
>> require that memzone to be IOVA-contiguous. I think these are two 
>> different concerns, and one does not, and should not imply the other.
> 
> Aha, you're saying that virtual-contiguous and IOVA-contiguous 
> requirements are different things that it could be usecases where 
> virtual contiguous is important but IOVA-contiguos is not required. It 
> is perfectly fine.
> As I understand IOVA-contiguous (physical) typically means 
> virtual-contiguous as well. Requirements to have everything virtually 
> contiguous and some blocks physically contiguous are unlikely. So, it 
> may be reduced to either virtual or physical contiguous. If mempool does 
> not care about physical contiguous at all, MEMPOOL_F_NO_PHYS_CONTIG flag 
> should be used and min_chunk_size should mean virtual contiguous 
> requirements. If mempool requires physical contiguous objects, there is 
> *no* MEMPOOL_F_NO_PHYS_CONTIG flag and min_chunk_size means physical 
> contiguous requirements.
> 

Fair point. I think we're in agreement now :) This will need to be 
documented then.

-- 
Thanks,
Anatoly

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v1 3/9] mempool: remove callback to get capabilities
  2018-03-15 12:00                   ` Burakov, Anatoly
@ 2018-03-15 12:44                     ` Andrew Rybchenko
  2018-03-19 17:05                       ` Olivier Matz
  0 siblings, 1 reply; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-15 12:44 UTC (permalink / raw)
  To: Burakov, Anatoly, dev; +Cc: Olivier MATZ

On 03/15/2018 03:00 PM, Burakov, Anatoly wrote:
> On 15-Mar-18 11:49 AM, Andrew Rybchenko wrote:
>> On 03/15/2018 12:48 PM, Burakov, Anatoly wrote:
>>> On 14-Mar-18 5:24 PM, Andrew Rybchenko wrote:
>>>> On 03/14/2018 07:53 PM, Burakov, Anatoly wrote:
>>>>> On 14-Mar-18 4:12 PM, Andrew Rybchenko wrote:
>>>>>> On 03/14/2018 05:40 PM, Burakov, Anatoly wrote:
>>>>>>> On 10-Mar-18 3:39 PM, Andrew Rybchenko wrote:
>>>>>>>> The callback was introduced to let generic code to know octeontx
>>>>>>>> mempool driver requirements to use single physically contiguous
>>>>>>>> memory chunk to store all objects and align object address to
>>>>>>>> total object size. Now these requirements are met using a new
>>>>>>>> callbacks to calculate required memory chunk size and to populate
>>>>>>>> objects using provided memory chunk.
>>>>>>>>
>>>>>>>> These capability flags are not used anywhere else.
>>>>>>>>
>>>>>>>> Restricting capabilities to flags is not generic and likely to
>>>>>>>> be insufficient to describe mempool driver features. If required
>>>>>>>> in the future, API which returns structured information may be
>>>>>>>> added.
>>>>>>>>
>>>>>>>> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
>>>>>>>> ---
>>>>>>>
>>>>>>> Just a general comment - it is not enough to describe minimum 
>>>>>>> memchunk requirements. With memory hotplug patchset that's 
>>>>>>> hopefully getting merged in 18.05, memzones will no longer be 
>>>>>>> guaranteed to be IOVA-contiguous. So, if a driver requires its 
>>>>>>> mempool to not only be populated from a single memzone, but a 
>>>>>>> single *physically contiguous* memzone, going by only callbacks 
>>>>>>> will not do, because whether or not something should be a single 
>>>>>>> memzone says nothing about whether this memzone has to also be 
>>>>>>> IOVA-contiguous.
>>>>>>>
>>>>>>> So i believe this needs to stay in one form or another.
>>>>>>>
>>>>>>> (also it would be nice to have a flag that a user could pass to 
>>>>>>> mempool_create that would force memzone reservation be 
>>>>>>> IOVA-contiguous, but that's a topic for another conversation. 
>>>>>>> prime user for this would be KNI.)
>>>>>>
>>>>>> I think that min_chunk_size should be treated as IOVA-contiguous.
>>>>>
>>>>> Why? It's perfectly reasonable to e.g. implement a software 
>>>>> mempool driver that would perform some optimizations due to all 
>>>>> objects being in the same VA-contiguous memzone, yet not be 
>>>>> dependent on underlying physical memory layout. These are two 
>>>>> separate concerns IMO.
>>>>
>>>> It looks like there is some misunderstanding here or I simply don't 
>>>> understand your point.
>>>> Above I mean that driver should be able to advertise its 
>>>> requirements on IOVA-contiguous regions.
>>>> If driver do not care about physical memory layout, no problem.
>>>
>>> Please correct me if i'm wrong, but my understanding was that you 
>>> wanted to use min_chunk as a way to express minimum requirements for 
>>> IOVA-contiguous memory. If i understood you correctly, i don't think 
>>> that's the way to go because there could be valid use cases where a 
>>> mempool driver would like to advertise min_chunk_size to be equal to 
>>> its total size (i.e. allocate everything in a single memzone), yet 
>>> not require that memzone to be IOVA-contiguous. I think these are 
>>> two different concerns, and one does not, and should not imply the 
>>> other.
>>
>> Aha, you're saying that virtual-contiguous and IOVA-contiguous 
>> requirements are different things that it could be usecases where 
>> virtual contiguous is important but IOVA-contiguos is not required. 
>> It is perfectly fine.
>> As I understand IOVA-contiguous (physical) typically means 
>> virtual-contiguous as well. Requirements to have everything virtually 
>> contiguous and some blocks physically contiguous are unlikely. So, it 
>> may be reduced to either virtual or physical contiguous. If mempool 
>> does not care about physical contiguous at all, 
>> MEMPOOL_F_NO_PHYS_CONTIG flag should be used and min_chunk_size 
>> should mean virtual contiguous requirements. If mempool requires 
>> physical contiguous objects, there is *no* MEMPOOL_F_NO_PHYS_CONTIG 
>> flag and min_chunk_size means physical contiguous requirements.
>>
>
> Fair point. I think we're in agreement now :) This will need to be 
> documented then.

OK, I'll do that. I don't mind rebasing my patch series on top of yours,
but I'd like to do it a bit later, when yours is closer to its final version
or even applied - it has really many prerequisites (pre-series) which
should be collected first. These are really major changes.

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v1 0/9] mempool: prepare to add bucket driver
  2018-03-10 15:39   ` [PATCH v1 0/9] mempool: prepare to add bucket driver Andrew Rybchenko
                       ` (9 preceding siblings ...)
  2018-03-14 15:49     ` [PATCH v1 0/9] mempool: prepare to add bucket driver santosh
@ 2018-03-19 17:03     ` Olivier Matz
  2018-03-20 10:09       ` Andrew Rybchenko
  10 siblings, 1 reply; 197+ messages in thread
From: Olivier Matz @ 2018-03-19 17:03 UTC (permalink / raw)
  To: Andrew Rybchenko
  Cc: dev, Santosh Shukla, Jerin Jacob, Hemant Agrawal, Shreyansh Jain

Hi Andrew,

Thank you for this nice rework.
Globally, the patchset looks good to me. I'm sending some comments
as reply to specific patches.

On Sat, Mar 10, 2018 at 03:39:33PM +0000, Andrew Rybchenko wrote:
> The initial patch series [1] is split into two to simplify processing.
> The second series relies on this one and will add bucket mempool driver
> and related ops.
> 
> The patch series has generic enhancements suggested by Olivier.
> Basically it adds driver callbacks to calculate required memory size and
> to populate objects using provided memory area. It allows to remove
> so-called capability flags used before to tell generic code how to
> allocate and slice allocated memory into mempool objects.
> Clean up which removes get_capabilities and register_memory_area is
> not strictly required, but I think right thing to do.
> Existing mempool drivers are updated.
> 
> I've kept rte_mempool_populate_iova_tab() intact since it seems to
> be not directly related XMEM API functions.

The function rte_mempool_populate_iova_tab() (actually, it was
rte_mempool_populate_phys_tab()) was introduced to support XMEM
API. In my opinion, it can also be deprecated.

> It breaks ABI since changes rte_mempool_ops. Also it removes
> rte_mempool_ops_register_memory_area() and
> rte_mempool_ops_get_capabilities() since corresponding callbacks are
> removed.
> 
> Internal global functions are not listed in map file since it is not
> a part of external API.
> 
> [1] http://dpdk.org/ml/archives/dev/2018-January/088698.html
> 
> RFCv1 -> RFCv2:
>   - add driver ops to calculate required memory size and populate
>     mempool objects, remove extra flags which were required before
>     to control it
>   - transition of octeontx and dpaa drivers to the new callbacks
>   - change info API to get information from driver required to
>     API user to know contiguous block size
>   - remove get_capabilities (not required any more and may be
>     substituted with more in info get API)
>   - remove register_memory_area since it is substituted with
>     populate callback which can do more
>   - use SPDX tags
>   - avoid all objects affinity to single lcore
>   - fix bucket get_count
>   - deprecate XMEM API
>   - avoid introduction of a new function to flush cache
>   - fix NO_CACHE_ALIGN case in bucket mempool
> 
> RFCv2 -> v1:
>   - split the series in two
>   - squash octeontx patches which implement calc_mem_size and populate
>     callbacks into the patch which removes get_capabilities since it is
>     the easiest way to untangle the tangle of tightly related library
>     functions and flags advertised by the driver
>   - consistently name default callbacks
>   - move default callbacks to dedicated file
>   - see detailed description in patches
> 
> Andrew Rybchenko (7):
>   mempool: add op to calculate memory size to be allocated
>   mempool: add op to populate objects using provided memory
>   mempool: remove callback to get capabilities
>   mempool: deprecate xmem functions
>   mempool/octeontx: prepare to remove register memory area op
>   mempool/dpaa: prepare to remove register memory area op
>   mempool: remove callback to register memory area
> 
> Artem V. Andreev (2):
>   mempool: ensure the mempool is initialized before populating
>   mempool: support flushing the default cache of the mempool
> 
>  doc/guides/rel_notes/deprecation.rst            |  12 +-
>  doc/guides/rel_notes/release_18_05.rst          |  32 ++-
>  drivers/mempool/dpaa/dpaa_mempool.c             |  13 +-
>  drivers/mempool/octeontx/rte_mempool_octeontx.c |  64 ++++--
>  lib/librte_mempool/Makefile                     |   3 +-
>  lib/librte_mempool/meson.build                  |   5 +-
>  lib/librte_mempool/rte_mempool.c                | 159 +++++++--------
>  lib/librte_mempool/rte_mempool.h                | 260 +++++++++++++++++-------
>  lib/librte_mempool/rte_mempool_ops.c            |  37 ++--
>  lib/librte_mempool/rte_mempool_ops_default.c    |  51 +++++
>  lib/librte_mempool/rte_mempool_version.map      |  11 +-
>  test/test/test_mempool.c                        |  31 ---
>  12 files changed, 437 insertions(+), 241 deletions(-)
>  create mode 100644 lib/librte_mempool/rte_mempool_ops_default.c
> 
> -- 
> 2.7.4
> 

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v1 1/9] mempool: add op to calculate memory size to be allocated
  2018-03-10 15:39     ` [PATCH v1 1/9] mempool: add op to calculate memory size to be allocated Andrew Rybchenko
  2018-03-11 12:51       ` santosh
@ 2018-03-19 17:03       ` Olivier Matz
  2018-03-20 10:29         ` Andrew Rybchenko
  2018-03-20 14:41         ` Bruce Richardson
  1 sibling, 2 replies; 197+ messages in thread
From: Olivier Matz @ 2018-03-19 17:03 UTC (permalink / raw)
  To: Andrew Rybchenko; +Cc: dev, Bruce Richardson

On Sat, Mar 10, 2018 at 03:39:34PM +0000, Andrew Rybchenko wrote:
> Size of memory chunk required to populate mempool objects depends
> on how objects are stored in the memory. Different mempool drivers
> may have different requirements and a new operation allows to
> calculate memory size in accordance with driver requirements and
> advertise requirements on minimum memory chunk size and alignment
> in a generic way.
> 
> Bump ABI version since the patch breaks it.
> 
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>

Looks good to me. Just see below for a few minor comments.

> ---
> RFCv2 -> v1:
>  - move default calc_mem_size callback to rte_mempool_ops_default.c
>  - add ABI changes to release notes
>  - name default callback consistently: rte_mempool_op_<callback>_default()
>  - bump ABI version since it is the first patch which breaks ABI
>  - describe default callback behaviour in details
>  - avoid introduction of internal function to cope with depration

typo (depration)

>    (keep it to deprecation patch)
>  - move cache-line or page boundary chunk alignment to default callback
>  - highlight that min_chunk_size and align parameters are output only

[...]

> --- a/lib/librte_mempool/Makefile
> +++ b/lib/librte_mempool/Makefile
> @@ -11,11 +11,12 @@ LDLIBS += -lrte_eal -lrte_ring
>  
>  EXPORT_MAP := rte_mempool_version.map
>  
> -LIBABIVER := 3
> +LIBABIVER := 4
>  
>  # all source are stored in SRCS-y
>  SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool.c
>  SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_ops.c
> +SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_ops_default.c
>  # install includes
>  SYMLINK-$(CONFIG_RTE_LIBRTE_MEMPOOL)-include := rte_mempool.h
>  
> diff --git a/lib/librte_mempool/meson.build b/lib/librte_mempool/meson.build
> index 7a4f3da..9e3b527 100644
> --- a/lib/librte_mempool/meson.build
> +++ b/lib/librte_mempool/meson.build
> @@ -1,7 +1,8 @@
>  # SPDX-License-Identifier: BSD-3-Clause
>  # Copyright(c) 2017 Intel Corporation
>  
> -version = 2
> -sources = files('rte_mempool.c', 'rte_mempool_ops.c')
> +version = 4
> +sources = files('rte_mempool.c', 'rte_mempool_ops.c',
> +		'rte_mempool_ops_default.c')
>  headers = files('rte_mempool.h')
>  deps += ['ring']

It's strange to see that meson does not have the same
.so version as the legacy build system.

+CC Bruce in case he wants to fix this issue separately.

[...]

> --- a/lib/librte_mempool/rte_mempool_version.map
> +++ b/lib/librte_mempool/rte_mempool_version.map
> @@ -51,3 +51,11 @@ DPDK_17.11 {
>  	rte_mempool_populate_iova_tab;
>  
>  } DPDK_16.07;
> +
> +DPDK_18.05 {
> +	global:
> +
> +	rte_mempool_op_calc_mem_size_default;
> +
> +} DPDK_17.11;
> +

Another minor comment. When applying the patch with git am:

Applying: mempool: add op to calculate memory size to be allocated
.git/rebase-apply/patch:399: new blank line at EOF.
+
warning: 1 line adds whitespace errors.

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v1 2/9] mempool: add op to populate objects using provided memory
  2018-03-10 15:39     ` [PATCH v1 2/9] mempool: add op to populate objects using provided memory Andrew Rybchenko
@ 2018-03-19 17:04       ` Olivier Matz
  2018-03-21  7:05         ` Andrew Rybchenko
  0 siblings, 1 reply; 197+ messages in thread
From: Olivier Matz @ 2018-03-19 17:04 UTC (permalink / raw)
  To: Andrew Rybchenko; +Cc: dev

On Sat, Mar 10, 2018 at 03:39:35PM +0000, Andrew Rybchenko wrote:
> The callback allows to customize how objects are stored in the
> memory chunk. Default implementation of the callback which simply
> puts objects one by one is available.
> 
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>

[...]

> --- a/lib/librte_mempool/rte_mempool.c
> +++ b/lib/librte_mempool/rte_mempool.c
> @@ -99,7 +99,8 @@ static unsigned optimize_object_size(unsigned obj_size)
>  }
>  
>  static void
> -mempool_add_elem(struct rte_mempool *mp, void *obj, rte_iova_t iova)
> +mempool_add_elem(struct rte_mempool *mp, __rte_unused void *opaque,
> +		 void *obj, rte_iova_t iova)
>  {
>  	struct rte_mempool_objhdr *hdr;
>  	struct rte_mempool_objtlr *tlr __rte_unused;
> @@ -116,9 +117,6 @@ mempool_add_elem(struct rte_mempool *mp, void *obj, rte_iova_t iova)
>  	tlr = __mempool_get_trailer(obj);
>  	tlr->cookie = RTE_MEMPOOL_TRAILER_COOKIE;
>  #endif
> -
> -	/* enqueue in ring */
> -	rte_mempool_ops_enqueue_bulk(mp, &obj, 1);
>  }
>  
>  /* call obj_cb() for each mempool element */

Before this patch, the purpose of mempool_add_elem() was to add
an object into a mempool:
1- write object header and trailers
2- chain it into the list of objects
3- add it into the ring/stack/whatever (=enqueue)

Now, the enqueue is done in rte_mempool_op_populate_default() or will be
done in the driver. I'm not sure it's a good idea to separate 3- from
2-, because an object that is chained into the list is expected to be
in the ring/stack too.

This risk of mis-synchronization is also increased by the fact that
ops->populate() can be provided by the driver and mempool_add_elem() is
passed as a callback pointer.

It's not clear to me why rte_mempool_ops_enqueue_bulk() is
removed from mempool_add_elem().



> @@ -396,16 +394,13 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
>  	else
>  		off = RTE_PTR_ALIGN_CEIL(vaddr, RTE_CACHE_LINE_SIZE) - vaddr;
>  
> -	while (off + total_elt_sz <= len && mp->populated_size < mp->size) {
> -		off += mp->header_size;
> -		if (iova == RTE_BAD_IOVA)
> -			mempool_add_elem(mp, (char *)vaddr + off,
> -				RTE_BAD_IOVA);
> -		else
> -			mempool_add_elem(mp, (char *)vaddr + off, iova + off);
> -		off += mp->elt_size + mp->trailer_size;
> -		i++;
> -	}
> +	if (off > len)
> +		return -EINVAL;

I think there is a memory leak here (memhdr), but it's my fault ;)
I introduced a similar code in commit 84121f1971:

      if (i == 0)
              return -EINVAL;

I can send a patch for it if you want.

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v1 3/9] mempool: remove callback to get capabilities
  2018-03-15 12:44                     ` Andrew Rybchenko
@ 2018-03-19 17:05                       ` Olivier Matz
  0 siblings, 0 replies; 197+ messages in thread
From: Olivier Matz @ 2018-03-19 17:05 UTC (permalink / raw)
  To: Andrew Rybchenko; +Cc: Burakov, Anatoly, dev

Hi,

On Thu, Mar 15, 2018 at 03:44:34PM +0300, Andrew Rybchenko wrote:

[...]

> > > Aha, you're saying that virtual-contiguous and IOVA-contiguous
> > > requirements are different things that it could be usecases where
> > > virtual contiguous is important but IOVA-contiguos is not required.
> > > It is perfectly fine.
> > > As I understand IOVA-contiguous (physical) typically means
> > > virtual-contiguous as well. Requirements to have everything
> > > virtually contiguous and some blocks physically contiguous are
> > > unlikely. So, it may be reduced to either virtual or physical
> > > contiguous. If mempool does not care about physical contiguous at
> > > all, MEMPOOL_F_NO_PHYS_CONTIG flag should be used and min_chunk_size
> > > should mean virtual contiguous requirements. If mempool requires
> > > physical contiguous objects, there is *no* MEMPOOL_F_NO_PHYS_CONTIG
> > > flag and min_chunk_size means physical contiguous requirements.

Just as a side note, from what I understood, having VA="contiguous" and
IOVA="don't care" would be helpful for mbuf pools with mellanox drivers
because they perform better in that case.
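
As a sketch of that use case (names and sizes below are purely illustrative),
an application only has to pass MEMPOOL_F_NO_PHYS_CONTIG (renamed to
MEMPOOL_F_NO_IOVA_CONTIG in v2 of the series) at creation time; the driver's
min_chunk_size is then a virtual-contiguity requirement only:

#include <rte_memory.h>
#include <rte_mempool.h>

/* Pool whose objects do not need to be IOVA-contiguous. */
static struct rte_mempool *
create_va_only_pool(void)
{
	return rte_mempool_create_empty("va_only_pool", 8192, 2048,
					256, 0, SOCKET_ID_ANY,
					MEMPOOL_F_NO_PHYS_CONTIG);
}

(The pool would still need rte_mempool_set_ops_byname() and
rte_mempool_populate_default() afterwards, as usual.)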

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v1 3/9] mempool: remove callback to get capabilities
  2018-03-10 15:39     ` [PATCH v1 3/9] mempool: remove callback to get capabilities Andrew Rybchenko
  2018-03-14 14:40       ` Burakov, Anatoly
@ 2018-03-19 17:06       ` Olivier Matz
  1 sibling, 0 replies; 197+ messages in thread
From: Olivier Matz @ 2018-03-19 17:06 UTC (permalink / raw)
  To: Andrew Rybchenko; +Cc: dev

On Sat, Mar 10, 2018 at 03:39:36PM +0000, Andrew Rybchenko wrote:
> The callback was introduced to let generic code to know octeontx
> mempool driver requirements to use single physically contiguous
> memory chunk to store all objects and align object address to
> total object size. Now these requirements are met using a new
> callbacks to calculate required memory chunk size and to populate
> objects using provided memory chunk.
> 
> These capability flags are not used anywhere else.
> 
> Restricting capabilities to flags is not generic and likely to
> be insufficient to describe mempool driver features. If required
> in the future, API which returns structured information may be
> added.
> 
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>

Looks fine...


> --- a/drivers/mempool/octeontx/rte_mempool_octeontx.c
> +++ b/drivers/mempool/octeontx/rte_mempool_octeontx.c
> @@ -126,14 +126,29 @@ octeontx_fpavf_get_count(const struct rte_mempool *mp)
>  	return octeontx_fpa_bufpool_free_count(pool);
>  }
>  
> -static int
> -octeontx_fpavf_get_capabilities(const struct rte_mempool *mp,
> -				unsigned int *flags)
> +static ssize_t
> +octeontx_fpavf_calc_mem_size(const struct rte_mempool *mp,
> +			     uint32_t obj_num, uint32_t pg_shift,
> +			     size_t *min_chunk_size, size_t *align)
>  {
> -	RTE_SET_USED(mp);
> -	*flags |= (MEMPOOL_F_CAPA_PHYS_CONTIG |
> -			MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS);
> -	return 0;
> +	ssize_t mem_size;
> +
> +	/*
> +	 * Simply need space for one more object to be able to
> +	 * fullfil alignment requirements.
> +	 */

...ah, just one typo:

  fullfil -> fulfil or fulfill
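
(For context, since the rest of the hunk is not quoted above: the comment
presumably means the driver asks for roughly

	mem_size = total_elt_sz * (obj_num + 1);

i.e. room for one extra object, so that the first object's address can be
rounded up to a multiple of the total object size while still leaving
obj_num usable objects in the chunk.)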

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v1 8/9] mempool: ensure the mempool is initialized before populating
  2018-03-10 15:39     ` [PATCH v1 8/9] mempool: ensure the mempool is initialized before populating Andrew Rybchenko
@ 2018-03-19 17:06       ` Olivier Matz
  2018-03-20 13:32         ` Andrew Rybchenko
  0 siblings, 1 reply; 197+ messages in thread
From: Olivier Matz @ 2018-03-19 17:06 UTC (permalink / raw)
  To: Andrew Rybchenko; +Cc: dev, Artem V. Andreev

On Sat, Mar 10, 2018 at 03:39:41PM +0000, Andrew Rybchenko wrote:
> From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>
> 
> Callback to calculate required memory area size may require mempool
> driver data to be already allocated and initialized.
> 
> Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
> ---
> RFCv2 -> v1:
>  - rename helper function as mempool_ops_alloc_once()
> 
>  lib/librte_mempool/rte_mempool.c | 29 ++++++++++++++++++++++-------
>  1 file changed, 22 insertions(+), 7 deletions(-)
> 
> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> index 844d907..12085cd 100644
> --- a/lib/librte_mempool/rte_mempool.c
> +++ b/lib/librte_mempool/rte_mempool.c
> @@ -322,6 +322,21 @@ rte_mempool_free_memchunks(struct rte_mempool *mp)
>  	}
>  }
>  
> +static int
> +mempool_ops_alloc_once(struct rte_mempool *mp)
> +{
> +	int ret;
> +
> +	/* create the internal ring if not already done */
> +	if ((mp->flags & MEMPOOL_F_POOL_CREATED) == 0) {
> +		ret = rte_mempool_ops_alloc(mp);
> +		if (ret != 0)
> +			return ret;
> +		mp->flags |= MEMPOOL_F_POOL_CREATED;
> +	}
> +	return 0;
> +}
> +
>  /* Add objects in the pool, using a physically contiguous memory
>   * zone. Return the number of objects added, or a negative value
>   * on error.
> @@ -336,13 +351,9 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
>  	struct rte_mempool_memhdr *memhdr;
>  	int ret;
>  
> -	/* create the internal ring if not already done */
> -	if ((mp->flags & MEMPOOL_F_POOL_CREATED) == 0) {
> -		ret = rte_mempool_ops_alloc(mp);
> -		if (ret != 0)
> -			return ret;
> -		mp->flags |= MEMPOOL_F_POOL_CREATED;
> -	}
> +	ret = mempool_ops_alloc_once(mp);
> +	if (ret != 0)
> +		return ret;
>  
>  	/* mempool is already populated */
>  	if (mp->populated_size >= mp->size)
> @@ -515,6 +526,10 @@ rte_mempool_populate_default(struct rte_mempool *mp)
>  	unsigned mz_id, n;
>  	int ret;
>  
> +	ret = mempool_ops_alloc_once(mp);
> +	if (ret != 0)
> +		return ret;
> +
>  	/* mempool must not be populated */
>  	if (mp->nb_mem_chunks != 0)
>  		return -EEXIST;


Is there a reason why we need to add it in
rte_mempool_populate_default() but not in rte_mempool_populate_virt() and
rte_mempool_populate_iova_tab()?

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v1 0/9] mempool: prepare to add bucket driver
  2018-03-19 17:03     ` Olivier Matz
@ 2018-03-20 10:09       ` Andrew Rybchenko
  2018-03-20 11:04         ` Thomas Monjalon
  0 siblings, 1 reply; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-20 10:09 UTC (permalink / raw)
  To: Olivier Matz
  Cc: dev, Santosh Shukla, Jerin Jacob, Hemant Agrawal, Shreyansh Jain,
	Thomas Monjalon

On 03/19/2018 08:03 PM, Olivier Matz wrote:
>> I've kept rte_mempool_populate_iova_tab() intact since it seems to
>> be not directly related XMEM API functions.
> The function rte_mempool_populate_iova_tab() (actually, it was
> rte_mempool_populate_phys_tab()) was introduced to support XMEM
> API. In my opinion, it can also be deprecated.

CC Thomas

Definitely OK for me. It is not listed in the deprecation notice included
in 18.02, but I think it is OK to deprecate it in 18.05 (since we're not
removing it, just deprecating it).

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v1 1/9] mempool: add op to calculate memory size to be allocated
  2018-03-19 17:03       ` Olivier Matz
@ 2018-03-20 10:29         ` Andrew Rybchenko
  2018-03-20 14:41         ` Bruce Richardson
  1 sibling, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-20 10:29 UTC (permalink / raw)
  To: Olivier Matz; +Cc: dev, Bruce Richardson

On 03/19/2018 08:03 PM, Olivier Matz wrote:
> On Sat, Mar 10, 2018 at 03:39:34PM +0000, Andrew Rybchenko wrote:
>> --- a/lib/librte_mempool/Makefile
>> +++ b/lib/librte_mempool/Makefile
>> @@ -11,11 +11,12 @@ LDLIBS += -lrte_eal -lrte_ring
>>   
>>   EXPORT_MAP := rte_mempool_version.map
>>   
>> -LIBABIVER := 3
>> +LIBABIVER := 4
>>   
>>   # all source are stored in SRCS-y
>>   SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool.c
>>   SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_ops.c
>> +SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_ops_default.c
>>   # install includes
>>   SYMLINK-$(CONFIG_RTE_LIBRTE_MEMPOOL)-include := rte_mempool.h
>>   
>> diff --git a/lib/librte_mempool/meson.build b/lib/librte_mempool/meson.build
>> index 7a4f3da..9e3b527 100644
>> --- a/lib/librte_mempool/meson.build
>> +++ b/lib/librte_mempool/meson.build
>> @@ -1,7 +1,8 @@
>>   # SPDX-License-Identifier: BSD-3-Clause
>>   # Copyright(c) 2017 Intel Corporation
>>   
>> -version = 2
>> -sources = files('rte_mempool.c', 'rte_mempool_ops.c')
>> +version = 4
>> +sources = files('rte_mempool.c', 'rte_mempool_ops.c',
>> +		'rte_mempool_ops_default.c')
>>   headers = files('rte_mempool.h')
>>   deps += ['ring']
> It's strange to see that meson does not have the same
> .so version than the legacy build system.
>
> +CC Bruce in case he wants to fix this issue separately.

I'll make a patchset to fix all similar issues. It should definitely be a
separate patchset since it should be backported to 18.02.

I think the main problem here is the version=1 default in the case of meson.
As a result, there are many meson.build files without an explicit version,
and setting it is simply forgotten when a new library is added to meson.

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v1 0/9] mempool: prepare to add bucket driver
  2018-03-20 10:09       ` Andrew Rybchenko
@ 2018-03-20 11:04         ` Thomas Monjalon
  0 siblings, 0 replies; 197+ messages in thread
From: Thomas Monjalon @ 2018-03-20 11:04 UTC (permalink / raw)
  To: Andrew Rybchenko
  Cc: Olivier Matz, dev, Santosh Shukla, Jerin Jacob, Hemant Agrawal,
	Shreyansh Jain

20/03/2018 11:09, Andrew Rybchenko:
> On 03/19/2018 08:03 PM, Olivier Matz wrote:
> >> I've kept rte_mempool_populate_iova_tab() intact since it seems to
> >> be not directly related XMEM API functions.
> > The function rte_mempool_populate_iova_tab() (actually, it was
> > rte_mempool_populate_phys_tab()) was introduced to support XMEM
> > API. In my opinion, it can also be deprecated.
> 
> CC Thomas
> 
> Definitely OK for me. It is not listed in deprecation notice included in 
> 18.02,
> but I think it is OK to deprecate it in 18.05 (since we're not removing,
> but just deprecating it).

Yes, it is OK to deprecate this function in addition to the other mempool
ones already listed as planned to be deprecated.

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v1 8/9] mempool: ensure the mempool is initialized before populating
  2018-03-19 17:06       ` Olivier Matz
@ 2018-03-20 13:32         ` Andrew Rybchenko
  2018-03-20 16:57           ` Olivier Matz
  0 siblings, 1 reply; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-20 13:32 UTC (permalink / raw)
  To: Olivier Matz; +Cc: dev, Artem V. Andreev

On 03/19/2018 08:06 PM, Olivier Matz wrote:
> On Sat, Mar 10, 2018 at 03:39:41PM +0000, Andrew Rybchenko wrote:
>> From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>
>>
>> Callback to calculate required memory area size may require mempool
>> driver data to be already allocated and initialized.
>>
>> Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
>> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
>> ---
>> RFCv2 -> v1:
>>   - rename helper function as mempool_ops_alloc_once()
>>
>>   lib/librte_mempool/rte_mempool.c | 29 ++++++++++++++++++++++-------
>>   1 file changed, 22 insertions(+), 7 deletions(-)
>>
>> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
>> index 844d907..12085cd 100644
>> --- a/lib/librte_mempool/rte_mempool.c
>> +++ b/lib/librte_mempool/rte_mempool.c
>> @@ -322,6 +322,21 @@ rte_mempool_free_memchunks(struct rte_mempool *mp)
>>   	}
>>   }
>>   
>> +static int
>> +mempool_ops_alloc_once(struct rte_mempool *mp)
>> +{
>> +	int ret;
>> +
>> +	/* create the internal ring if not already done */
>> +	if ((mp->flags & MEMPOOL_F_POOL_CREATED) == 0) {
>> +		ret = rte_mempool_ops_alloc(mp);
>> +		if (ret != 0)
>> +			return ret;
>> +		mp->flags |= MEMPOOL_F_POOL_CREATED;
>> +	}
>> +	return 0;
>> +}
>> +
>>   /* Add objects in the pool, using a physically contiguous memory
>>    * zone. Return the number of objects added, or a negative value
>>    * on error.
>> @@ -336,13 +351,9 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
>>   	struct rte_mempool_memhdr *memhdr;
>>   	int ret;
>>   
>> -	/* create the internal ring if not already done */
>> -	if ((mp->flags & MEMPOOL_F_POOL_CREATED) == 0) {
>> -		ret = rte_mempool_ops_alloc(mp);
>> -		if (ret != 0)
>> -			return ret;
>> -		mp->flags |= MEMPOOL_F_POOL_CREATED;
>> -	}
>> +	ret = mempool_ops_alloc_once(mp);
>> +	if (ret != 0)
>> +		return ret;
>>   
>>   	/* mempool is already populated */
>>   	if (mp->populated_size >= mp->size)
>> @@ -515,6 +526,10 @@ rte_mempool_populate_default(struct rte_mempool *mp)
>>   	unsigned mz_id, n;
>>   	int ret;
>>   
>> +	ret = mempool_ops_alloc_once(mp);
>> +	if (ret != 0)
>> +		return ret;
>> +
>>   	/* mempool must not be populated */
>>   	if (mp->nb_mem_chunks != 0)
>>   		return -EEXIST;
>
> Is there a reason why we need to add it in
> rte_mempool_populate_default() but not in rte_mempool_populate_virt() and
> rte_mempool_populate_iova_tab()?

The reason is the rte_mempool_ops_calc_mem_size() call made from
rte_mempool_populate_default(). rte_mempool_ops_*() are not
called directly from rte_mempool_populate_virt() and
rte_mempool_populate_iova_tab().

In fact I've found out that rte_mempool_ops_calc_mem_size() is also called
from get_anon_size(), which is called from rte_mempool_populate_anon().
So, we need to add it for that path as well.

Maybe it is even better to make this patch the first in the series, to make
sure that everything is already OK when rte_mempool_ops_calc_mem_size()
is added. What do you think?
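
To make the affected path concrete, a minimal sketch of the guard being
discussed for rte_mempool_populate_anon() (this is roughly what the v2 patch
later in the thread ends up doing; error handling trimmed):

	/* Driver data must exist before any path that can reach
	 * rte_mempool_ops_calc_mem_size(). */
	ret = mempool_ops_alloc_once(mp);
	if (ret != 0)
		return ret;

	/* get_anon_size() ends up calling rte_mempool_ops_calc_mem_size(). */
	size = get_anon_size(mp);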

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v1 1/9] mempool: add op to calculate memory size to be allocated
  2018-03-19 17:03       ` Olivier Matz
  2018-03-20 10:29         ` Andrew Rybchenko
@ 2018-03-20 14:41         ` Bruce Richardson
  1 sibling, 0 replies; 197+ messages in thread
From: Bruce Richardson @ 2018-03-20 14:41 UTC (permalink / raw)
  To: Olivier Matz; +Cc: Andrew Rybchenko, dev

On Mon, Mar 19, 2018 at 06:03:52PM +0100, Olivier Matz wrote:
> On Sat, Mar 10, 2018 at 03:39:34PM +0000, Andrew Rybchenko wrote:
> > Size of memory chunk required to populate mempool objects depends
> > on how objects are stored in the memory. Different mempool drivers
> > may have different requirements and a new operation allows to
> > calculate memory size in accordance with driver requirements and
> > advertise requirements on minimum memory chunk size and alignment
> > in a generic way.
> > 
> > Bump ABI version since the patch breaks it.
> > 
> > Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
> 
> Looks good to me. Just see below for few minor comments.
> 
> > ---
> > RFCv2 -> v1:
> >  - move default calc_mem_size callback to rte_mempool_ops_default.c
> >  - add ABI changes to release notes
> >  - name default callback consistently: rte_mempool_op_<callback>_default()
> >  - bump ABI version since it is the first patch which breaks ABI
> >  - describe default callback behaviour in details
> >  - avoid introduction of internal function to cope with depration
> 
> typo (depration)
> 
> >    (keep it to deprecation patch)
> >  - move cache-line or page boundary chunk alignment to default callback
> >  - highlight that min_chunk_size and align parameters are output only
> 
> [...]
> 
> > --- a/lib/librte_mempool/Makefile
> > +++ b/lib/librte_mempool/Makefile
> > @@ -11,11 +11,12 @@ LDLIBS += -lrte_eal -lrte_ring
> >  
> >  EXPORT_MAP := rte_mempool_version.map
> >  
> > -LIBABIVER := 3
> > +LIBABIVER := 4
> >  
> >  # all source are stored in SRCS-y
> >  SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool.c
> >  SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_ops.c
> > +SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_ops_default.c
> >  # install includes
> >  SYMLINK-$(CONFIG_RTE_LIBRTE_MEMPOOL)-include := rte_mempool.h
> >  
> > diff --git a/lib/librte_mempool/meson.build b/lib/librte_mempool/meson.build
> > index 7a4f3da..9e3b527 100644
> > --- a/lib/librte_mempool/meson.build
> > +++ b/lib/librte_mempool/meson.build
> > @@ -1,7 +1,8 @@
> >  # SPDX-License-Identifier: BSD-3-Clause
> >  # Copyright(c) 2017 Intel Corporation
> >  
> > -version = 2
> > -sources = files('rte_mempool.c', 'rte_mempool_ops.c')
> > +version = 4
> > +sources = files('rte_mempool.c', 'rte_mempool_ops.c',
> > +		'rte_mempool_ops_default.c')
> >  headers = files('rte_mempool.h')
> >  deps += ['ring']
> 
> It's strange to see that meson does not have the same
> .so version than the legacy build system.
> 
> +CC Bruce in case he wants to fix this issue separately.
>
The .so version drift occurred during the development of the next-build
tree, sadly. While initially all versions were correct, as the patches
flowed into mainline I wasn't able to keep up with all the version changes.
:-(
Since nobody is actually using meson for packaging (yet), I'm not sure this
is critical, so I don't mind whether it's fixed in a separate patch or not.

/Bruce

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v1 8/9] mempool: ensure the mempool is initialized before populating
  2018-03-20 13:32         ` Andrew Rybchenko
@ 2018-03-20 16:57           ` Olivier Matz
  0 siblings, 0 replies; 197+ messages in thread
From: Olivier Matz @ 2018-03-20 16:57 UTC (permalink / raw)
  To: Andrew Rybchenko; +Cc: dev, Artem V. Andreev

On Tue, Mar 20, 2018 at 04:32:04PM +0300, Andrew Rybchenko wrote:
> On 03/19/2018 08:06 PM, Olivier Matz wrote:
> > On Sat, Mar 10, 2018 at 03:39:41PM +0000, Andrew Rybchenko wrote:
> > > From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>
> > > 
> > > Callback to calculate required memory area size may require mempool
> > > driver data to be already allocated and initialized.
> > > 
> > > Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
> > > Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
> > > ---
> > > RFCv2 -> v1:
> > >   - rename helper function as mempool_ops_alloc_once()
> > > 
> > >   lib/librte_mempool/rte_mempool.c | 29 ++++++++++++++++++++++-------
> > >   1 file changed, 22 insertions(+), 7 deletions(-)
> > > 
> > > diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> > > index 844d907..12085cd 100644
> > > --- a/lib/librte_mempool/rte_mempool.c
> > > +++ b/lib/librte_mempool/rte_mempool.c
> > > @@ -322,6 +322,21 @@ rte_mempool_free_memchunks(struct rte_mempool *mp)
> > >   	}
> > >   }
> > > +static int
> > > +mempool_ops_alloc_once(struct rte_mempool *mp)
> > > +{
> > > +	int ret;
> > > +
> > > +	/* create the internal ring if not already done */
> > > +	if ((mp->flags & MEMPOOL_F_POOL_CREATED) == 0) {
> > > +		ret = rte_mempool_ops_alloc(mp);
> > > +		if (ret != 0)
> > > +			return ret;
> > > +		mp->flags |= MEMPOOL_F_POOL_CREATED;
> > > +	}
> > > +	return 0;
> > > +}
> > > +
> > >   /* Add objects in the pool, using a physically contiguous memory
> > >    * zone. Return the number of objects added, or a negative value
> > >    * on error.
> > > @@ -336,13 +351,9 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
> > >   	struct rte_mempool_memhdr *memhdr;
> > >   	int ret;
> > > -	/* create the internal ring if not already done */
> > > -	if ((mp->flags & MEMPOOL_F_POOL_CREATED) == 0) {
> > > -		ret = rte_mempool_ops_alloc(mp);
> > > -		if (ret != 0)
> > > -			return ret;
> > > -		mp->flags |= MEMPOOL_F_POOL_CREATED;
> > > -	}
> > > +	ret = mempool_ops_alloc_once(mp);
> > > +	if (ret != 0)
> > > +		return ret;
> > >   	/* mempool is already populated */
> > >   	if (mp->populated_size >= mp->size)
> > > @@ -515,6 +526,10 @@ rte_mempool_populate_default(struct rte_mempool *mp)
> > >   	unsigned mz_id, n;
> > >   	int ret;
> > > +	ret = mempool_ops_alloc_once(mp);
> > > +	if (ret != 0)
> > > +		return ret;
> > > +
> > >   	/* mempool must not be populated */
> > >   	if (mp->nb_mem_chunks != 0)
> > >   		return -EEXIST;
> > 
> > Is there a reason why we need to add it in
> > rte_mempool_populate_default() but not in rte_mempool_populate_virt() and
> > rte_mempool_populate_iova_tab()?
> 
> The reason is rte_mempool_ops_calc_mem_size() call
> from rte_mempool_populate_default(). rte_mempool_ops_*() are not
> called directly from rte_mempool_populate_virt() and
> rte_mempool_populate_iova_tab().
> 
> In fact I've found out that rte_mempool_ops_calc_mem_size() is called
> from get_anon_size() which is called from rte_mempool_populate_anon().
> So, we need to add to get_anon_size() as well.
> 
> May be it is even better to make the patch the first in series to make
> sure that it is already OK when rte_mempool_ops_calc_mem_size()
> is added. What do you think?

Yes, sounds good.


Olivier

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v1 2/9] mempool: add op to populate objects using provided memory
  2018-03-19 17:04       ` Olivier Matz
@ 2018-03-21  7:05         ` Andrew Rybchenko
  0 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-21  7:05 UTC (permalink / raw)
  To: Olivier Matz; +Cc: dev

On 03/19/2018 08:04 PM, Olivier Matz wrote:
> On Sat, Mar 10, 2018 at 03:39:35PM +0000, Andrew Rybchenko wrote:
>> The callback allows to customize how objects are stored in the
>> memory chunk. Default implementation of the callback which simply
>> puts objects one by one is available.
>>
>> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
> [...]
>
>> --- a/lib/librte_mempool/rte_mempool.c
>> +++ b/lib/librte_mempool/rte_mempool.c
>> @@ -99,7 +99,8 @@ static unsigned optimize_object_size(unsigned obj_size)
>>   }
>>   
>>   static void
>> -mempool_add_elem(struct rte_mempool *mp, void *obj, rte_iova_t iova)
>> +mempool_add_elem(struct rte_mempool *mp, __rte_unused void *opaque,
>> +		 void *obj, rte_iova_t iova)
>>   {
>>   	struct rte_mempool_objhdr *hdr;
>>   	struct rte_mempool_objtlr *tlr __rte_unused;
>> @@ -116,9 +117,6 @@ mempool_add_elem(struct rte_mempool *mp, void *obj, rte_iova_t iova)
>>   	tlr = __mempool_get_trailer(obj);
>>   	tlr->cookie = RTE_MEMPOOL_TRAILER_COOKIE;
>>   #endif
>> -
>> -	/* enqueue in ring */
>> -	rte_mempool_ops_enqueue_bulk(mp, &obj, 1);
>>   }
>>   
>>   /* call obj_cb() for each mempool element */
> Before this patch, the purpose of mempool_add_elem() was to add
> an object into a mempool:
> 1- write object header and trailers
> 2- chain it into the list of objects
> 3- add it into the ring/stack/whatever (=enqueue)
>
> Now, the enqueue is done in rte_mempool_op_populate_default() or will be
> done in the driver. I'm not sure it's a good idea to separate 3- from
> 2-, because an object that is chained into the list is expected to be
> in the ring/stack too.

When an object is dequeued, it is still chained into the list, but not in
the ring/stack. The separation is so that the callback handles the generic
mempool housekeeping, while the enqueue is a driver-specific operation.

> This risk of mis-synchronization is also enforced by the fact that
> ops->populate() can be provided by the driver and mempool_add_elem() is
> passed as a callback pointer.
>
> It's not clear to me why rte_mempool_ops_enqueue_bulk() is
> removed from mempool_add_elem().

The idea was that it could be more efficient (and probably the only way)
to do the initial enqueue inside the driver. In theory the bucket mempool
could init and enqueue full buckets instead of objects one-by-one.
However, in the end it turned out to be easier to reuse the default populate
callback and the enqueue operation.
So, now I have no strong opinion and agree with your arguments; that's why
I've tried to highlight it in the rte_mempool_populate_t description.
Even an explicit description does not always help...
So, should I move the enqueue back into the callback or leave it as is in
my patches?
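
For reference, the arrangement being discussed looks roughly like this
(a simplified sketch of the default populate callback, not the exact code):
the obj_cb (mempool_add_elem) does the header/trailer and list chaining,
while the enqueue now lives in the populate callback itself:

static int
populate_default_sketch(struct rte_mempool *mp, unsigned int max_objs,
			void *vaddr, rte_iova_t iova, size_t len,
			rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
{
	size_t total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
	size_t off;
	unsigned int i;
	void *obj;

	for (off = 0, i = 0; off + total_elt_sz <= len && i < max_objs; i++) {
		off += mp->header_size;
		obj = (char *)vaddr + off;
		/* write header/trailer and chain into the element list */
		obj_cb(mp, obj_cb_arg, obj,
		       (iova == RTE_BAD_IOVA) ? RTE_BAD_IOVA : (iova + off));
		/* enqueue moved here from mempool_add_elem() */
		rte_mempool_ops_enqueue_bulk(mp, &obj, 1);
		off += mp->elt_size + mp->trailer_size;
	}

	return (int)i;
}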

>> @@ -396,16 +394,13 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
>>   	else
>>   		off = RTE_PTR_ALIGN_CEIL(vaddr, RTE_CACHE_LINE_SIZE) - vaddr;
>>   
>> -	while (off + total_elt_sz <= len && mp->populated_size < mp->size) {
>> -		off += mp->header_size;
>> -		if (iova == RTE_BAD_IOVA)
>> -			mempool_add_elem(mp, (char *)vaddr + off,
>> -				RTE_BAD_IOVA);
>> -		else
>> -			mempool_add_elem(mp, (char *)vaddr + off, iova + off);
>> -		off += mp->elt_size + mp->trailer_size;
>> -		i++;
>> -	}
>> +	if (off > len)
>> +		return -EINVAL;
> I think there is a memory leak here (memhdr), but it's my fault ;)
> I introduced a similar code in commit 84121f1971:
>
>        if (i == 0)
>                return -EINVAL;
>
> I can send a patch for it if you want.

This one is yours, above is mine :)
Don't worry, I'll submit a separate pre-patch to fix it with the appropriate
Fixes tag and Cc.

^ permalink raw reply	[flat|nested] 197+ messages in thread

* [PATCH v2 00/11] mempool: prepare to add bucket driver
  2018-01-23 13:15 ` [RFC v2 00/17] " Andrew Rybchenko
                     ` (18 preceding siblings ...)
  2018-03-10 15:39   ` [PATCH v1 0/9] mempool: prepare to add bucket driver Andrew Rybchenko
@ 2018-03-25 16:20   ` Andrew Rybchenko
  2018-03-25 16:20     ` [PATCH v2 01/11] mempool: fix memhdr leak when no objects are populated Andrew Rybchenko
                       ` (10 more replies)
  2018-03-26 16:09   ` [PATCH v3 00/11] mempool: prepare to add bucket driver Andrew Rybchenko
  2018-03-26 16:12   ` [PATCH v1 0/6] mempool: add bucket driver Andrew Rybchenko
  21 siblings, 11 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-25 16:20 UTC (permalink / raw)
  To: dev
  Cc: Olivier MATZ, Thomas Monjalon, Anatoly Burakov, Santosh Shukla,
	Jerin Jacob, Hemant Agrawal, Shreyansh Jain

The patch series should be applied on top of [7].

The initial patch series [1] is split into two to simplify processing.
The second series relies on this one and will add bucket mempool driver
and related ops.

The patch series has generic enhancements suggested by Olivier.
Basically it adds driver callbacks to calculate the required memory size and
to populate objects using the provided memory area. This makes it possible
to remove the so-called capability flags used before to tell the generic
code how to allocate and slice the allocated memory into mempool objects.
The clean-up which removes get_capabilities and register_memory_area is
not strictly required, but I think it is the right thing to do.
Existing mempool drivers are updated.

rte_mempool_populate_iova_tab() is also deprecated in v2 as agreed in [2].
Unfortunately it requires adding the -Wno-deprecated-declarations flag
to librte_mempool since the function is used by the earlier-deprecated
rte_mempool_populate_phys_tab(). If the latter may be removed in this
release, we can avoid adding the flag that allows usage of deprecated
functions.

One open question remains from the previous review [3].

The patch series interferes with memory hotplug for DPDK [4] ([5] to be
precise). So, a rebase may be required.

A new patch is added to the series to rename MEMPOOL_F_NO_PHYS_CONTIG
as MEMPOOL_F_NO_IOVA_CONTIG as agreed in [6].
MEMPOOL_F_CAPA_PHYS_CONTIG is not renamed since it is removed in this
patchset.

It breaks the ABI since it changes rte_mempool_ops. Also it removes
rte_mempool_ops_register_memory_area() and
rte_mempool_ops_get_capabilities() since the corresponding callbacks are
removed.

Internal global functions are not listed in the map file since they are
not a part of the external API.

[1] https://dpdk.org/ml/archives/dev/2018-January/088698.html
[2] https://dpdk.org/ml/archives/dev/2018-March/093186.html
[3] https://dpdk.org/ml/archives/dev/2018-March/093329.html
[4] https://dpdk.org/ml/archives/dev/2018-March/092070.html
[5] https://dpdk.org/ml/archives/dev/2018-March/092088.html
[6] https://dpdk.org/ml/archives/dev/2018-March/093345.html
[7] https://dpdk.org/ml/archives/dev/2018-March/093196.html

v1 -> v2:
  - deprecate rte_mempool_populate_iova_tab()
  - add patch to fix memory leak if no objects are populated
  - add patch to rename MEMPOOL_F_NO_PHYS_CONTIG
  - minor fixes (typos, blank line at the end of file)
  - highlight meaning of min_chunk_size (when it is virtual or
    physical contiguous)
  - make sure that mempool is initialized in rte_mempool_populate_anon()
  - move patch to ensure that mempool is initialized earlier in the series

RFCv2 -> v1:
  - split the series in two
  - squash octeontx patches which implement calc_mem_size and populate
    callbacks into the patch which removes get_capabilities since it is
    the easiest way to untangle the tangle of tightly related library
    functions and flags advertised by the driver
  - consistently name default callbacks
  - move default callbacks to dedicated file
  - see detailed description in patches

RFCv1 -> RFCv2:
  - add driver ops to calculate required memory size and populate
    mempool objects, remove extra flags which were required before
    to control it
  - transition of octeontx and dpaa drivers to the new callbacks
  - change info API to get information from driver required to
    API user to know contiguous block size
  - remove get_capabilities (not required any more and may be
    substituted with more in info get API)
  - remove register_memory_area since it is substituted with
    populate callback which can do more
  - use SPDX tags
  - avoid all objects affinity to single lcore
  - fix bucket get_count
  - deprecate XMEM API
  - avoid introduction of a new function to flush cache
  - fix NO_CACHE_ALIGN case in bucket mempool


Andrew Rybchenko (9):
  mempool: fix memhdr leak when no objects are populated
  mempool: rename flag to control IOVA-contiguous objects
  mempool: add op to calculate memory size to be allocated
  mempool: add op to populate objects using provided memory
  mempool: remove callback to get capabilities
  mempool: deprecate xmem functions
  mempool/octeontx: prepare to remove register memory area op
  mempool/dpaa: prepare to remove register memory area op
  mempool: remove callback to register memory area

Artem V. Andreev (2):
  mempool: ensure the mempool is initialized before populating
  mempool: support flushing the default cache of the mempool

 doc/guides/rel_notes/deprecation.rst            |  12 +-
 doc/guides/rel_notes/release_18_05.rst          |  33 ++-
 drivers/mempool/dpaa/dpaa_mempool.c             |  13 +-
 drivers/mempool/octeontx/rte_mempool_octeontx.c |  64 ++++--
 drivers/net/thunderx/nicvf_ethdev.c             |   2 +-
 lib/librte_mempool/Makefile                     |   6 +-
 lib/librte_mempool/meson.build                  |  17 +-
 lib/librte_mempool/rte_mempool.c                | 179 ++++++++-------
 lib/librte_mempool/rte_mempool.h                | 280 +++++++++++++++++-------
 lib/librte_mempool/rte_mempool_ops.c            |  37 ++--
 lib/librte_mempool/rte_mempool_ops_default.c    |  51 +++++
 lib/librte_mempool/rte_mempool_version.map      |  10 +-
 test/test/test_mempool.c                        |  31 ---
 13 files changed, 485 insertions(+), 250 deletions(-)
 create mode 100644 lib/librte_mempool/rte_mempool_ops_default.c

-- 
2.7.4

^ permalink raw reply	[flat|nested] 197+ messages in thread

* [PATCH v2 01/11] mempool: fix memhdr leak when no objects are populated
  2018-03-25 16:20   ` [PATCH v2 00/11] " Andrew Rybchenko
@ 2018-03-25 16:20     ` Andrew Rybchenko
  2018-03-25 16:20     ` [PATCH v2 02/11] mempool: rename flag to control IOVA-contiguous objects Andrew Rybchenko
                       ` (9 subsequent siblings)
  10 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-25 16:20 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ, stable

Fixes: 84121f197187 ("mempool: store memory chunks in a list")
Cc: stable@dpdk.org

Suggested-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
v1 -> v2:
 - added in v2 as discussed in [1]

[1] https://dpdk.org/ml/archives/dev/2018-March/093329.html

 lib/librte_mempool/rte_mempool.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 54f7f4b..80bf941 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -408,12 +408,18 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	}
 
 	/* not enough room to store one object */
-	if (i == 0)
-		return -EINVAL;
+	if (i == 0) {
+		ret = -EINVAL;
+		goto fail;
+	}
 
 	STAILQ_INSERT_TAIL(&mp->mem_list, memhdr, next);
 	mp->nb_mem_chunks++;
 	return i;
+
+fail:
+	rte_free(memhdr);
+	return ret;
 }
 
 int
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [PATCH v2 02/11] mempool: rename flag to control IOVA-contiguous objects
  2018-03-25 16:20   ` [PATCH v2 00/11] " Andrew Rybchenko
  2018-03-25 16:20     ` [PATCH v2 01/11] mempool: fix memhdr leak when no objects are populated Andrew Rybchenko
@ 2018-03-25 16:20     ` Andrew Rybchenko
  2018-03-25 16:20     ` [PATCH v2 03/11] mempool: ensure the mempool is initialized before populating Andrew Rybchenko
                       ` (8 subsequent siblings)
  10 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-25 16:20 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ

Flag MEMPOOL_F_NO_PHYS_CONTIG is renamed to MEMPOOL_F_NO_IOVA_CONTIG
to follow IO memory contiguity terminology.
MEMPOOL_F_NO_PHYS_CONTIG is kept for backward compatibility and
deprecated.

Suggested-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
v1 -> v2:
 - added in v2 as discussed in [1]

[1] https://dpdk.org/ml/archives/dev/2018-March/093345.html

 drivers/net/thunderx/nicvf_ethdev.c | 2 +-
 lib/librte_mempool/rte_mempool.c    | 6 +++---
 lib/librte_mempool/rte_mempool.h    | 9 +++++----
 3 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 067f224..f3be744 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -1308,7 +1308,7 @@ nicvf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 	}
 
 	/* Mempool memory must be physically contiguous */
-	if (mp->flags & MEMPOOL_F_NO_PHYS_CONTIG) {
+	if (mp->flags & MEMPOOL_F_NO_IOVA_CONTIG) {
 		PMD_INIT_LOG(ERR, "Mempool memory must be physically contiguous");
 		return -EINVAL;
 	}
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 80bf941..6ffa795 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -446,7 +446,7 @@ rte_mempool_populate_iova_tab(struct rte_mempool *mp, char *vaddr,
 	if (mp->nb_mem_chunks != 0)
 		return -EEXIST;
 
-	if (mp->flags & MEMPOOL_F_NO_PHYS_CONTIG)
+	if (mp->flags & MEMPOOL_F_NO_IOVA_CONTIG)
 		return rte_mempool_populate_iova(mp, vaddr, RTE_BAD_IOVA,
 			pg_num * pg_sz, free_cb, opaque);
 
@@ -500,7 +500,7 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
 	if (RTE_ALIGN_CEIL(len, pg_sz) != len)
 		return -EINVAL;
 
-	if (mp->flags & MEMPOOL_F_NO_PHYS_CONTIG)
+	if (mp->flags & MEMPOOL_F_NO_IOVA_CONTIG)
 		return rte_mempool_populate_iova(mp, addr, RTE_BAD_IOVA,
 			len, free_cb, opaque);
 
@@ -602,7 +602,7 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 			goto fail;
 		}
 
-		if (mp->flags & MEMPOOL_F_NO_PHYS_CONTIG)
+		if (mp->flags & MEMPOOL_F_NO_IOVA_CONTIG)
 			iova = RTE_BAD_IOVA;
 		else
 			iova = mz->iova;
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 8b1b7f7..e531a15 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -244,7 +244,8 @@ struct rte_mempool {
 #define MEMPOOL_F_SP_PUT         0x0004 /**< Default put is "single-producer".*/
 #define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
 #define MEMPOOL_F_POOL_CREATED   0x0010 /**< Internal: pool is created. */
-#define MEMPOOL_F_NO_PHYS_CONTIG 0x0020 /**< Don't need physically contiguous objs. */
+#define MEMPOOL_F_NO_IOVA_CONTIG 0x0020 /**< Don't need IOVA contiguous objs. */
+#define MEMPOOL_F_NO_PHYS_CONTIG MEMPOOL_F_NO_IOVA_CONTIG /* deprecated */
 /**
  * This capability flag is advertised by a mempool handler, if the whole
  * memory area containing the objects must be physically contiguous.
@@ -710,8 +711,8 @@ typedef void (rte_mempool_ctor_t)(struct rte_mempool *, void *);
  *   - MEMPOOL_F_SC_GET: If this flag is set, the default behavior
  *     when using rte_mempool_get() or rte_mempool_get_bulk() is
  *     "single-consumer". Otherwise, it is "multi-consumers".
- *   - MEMPOOL_F_NO_PHYS_CONTIG: If set, allocated objects won't
- *     necessarily be contiguous in physical memory.
+ *   - MEMPOOL_F_NO_IOVA_CONTIG: If set, allocated objects won't
+ *     necessarily be contiguous in IO memory.
  * @return
  *   The pointer to the new allocated mempool, on success. NULL on error
  *   with rte_errno set appropriately. Possible rte_errno values include:
@@ -1439,7 +1440,7 @@ rte_mempool_empty(const struct rte_mempool *mp)
  *   A pointer (virtual address) to the element of the pool.
  * @return
  *   The IO address of the elt element.
- *   If the mempool was created with MEMPOOL_F_NO_PHYS_CONTIG, the
+ *   If the mempool was created with MEMPOOL_F_NO_IOVA_CONTIG, the
  *   returned value is RTE_BAD_IOVA.
  */
 static inline rte_iova_t
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [PATCH v2 03/11] mempool: ensure the mempool is initialized before populating
  2018-03-25 16:20   ` [PATCH v2 00/11] " Andrew Rybchenko
  2018-03-25 16:20     ` [PATCH v2 01/11] mempool: fix memhdr leak when no objects are populated Andrew Rybchenko
  2018-03-25 16:20     ` [PATCH v2 02/11] mempool: rename flag to control IOVA-contiguous objects Andrew Rybchenko
@ 2018-03-25 16:20     ` Andrew Rybchenko
  2018-03-25 16:20     ` [PATCH v2 04/11] mempool: add op to calculate memory size to be allocated Andrew Rybchenko
                       ` (7 subsequent siblings)
  10 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-25 16:20 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ, Artem V. Andreev

From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>

Callback to calculate required memory area size may require mempool
driver data to be already allocated and initialized.

Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
v1 -> v2:
 - add init check to mempool_ops_alloc_once()
 - move ealier in the patch series since it is required when driver
   ops are called and it is better to have it before new ops are added

RFCv2 -> v1:
 - rename helper function as mempool_ops_alloc_once()

 lib/librte_mempool/rte_mempool.c | 33 ++++++++++++++++++++++++++-------
 1 file changed, 26 insertions(+), 7 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 6ffa795..d8e3720 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -323,6 +323,21 @@ rte_mempool_free_memchunks(struct rte_mempool *mp)
 	}
 }
 
+static int
+mempool_ops_alloc_once(struct rte_mempool *mp)
+{
+	int ret;
+
+	/* create the internal ring if not already done */
+	if ((mp->flags & MEMPOOL_F_POOL_CREATED) == 0) {
+		ret = rte_mempool_ops_alloc(mp);
+		if (ret != 0)
+			return ret;
+		mp->flags |= MEMPOOL_F_POOL_CREATED;
+	}
+	return 0;
+}
+
 /* Add objects in the pool, using a physically contiguous memory
  * zone. Return the number of objects added, or a negative value
  * on error.
@@ -339,13 +354,9 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	struct rte_mempool_memhdr *memhdr;
 	int ret;
 
-	/* create the internal ring if not already done */
-	if ((mp->flags & MEMPOOL_F_POOL_CREATED) == 0) {
-		ret = rte_mempool_ops_alloc(mp);
-		if (ret != 0)
-			return ret;
-		mp->flags |= MEMPOOL_F_POOL_CREATED;
-	}
+	ret = mempool_ops_alloc_once(mp);
+	if (ret != 0)
+		return ret;
 
 	/* Notify memory area to mempool */
 	ret = rte_mempool_ops_register_memory_area(mp, vaddr, iova, len);
@@ -556,6 +567,10 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 	unsigned int mp_flags;
 	int ret;
 
+	ret = mempool_ops_alloc_once(mp);
+	if (ret != 0)
+		return ret;
+
 	/* mempool must not be populated */
 	if (mp->nb_mem_chunks != 0)
 		return -EEXIST;
@@ -667,6 +682,10 @@ rte_mempool_populate_anon(struct rte_mempool *mp)
 		return 0;
 	}
 
+	ret = mempool_ops_alloc_once(mp);
+	if (ret != 0)
+		return ret;
+
 	/* get chunk of virtually continuous memory */
 	size = get_anon_size(mp);
 	addr = mmap(NULL, size, PROT_READ | PROT_WRITE,
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [PATCH v2 04/11] mempool: add op to calculate memory size to be allocated
  2018-03-25 16:20   ` [PATCH v2 00/11] " Andrew Rybchenko
                       ` (2 preceding siblings ...)
  2018-03-25 16:20     ` [PATCH v2 03/11] mempool: ensure the mempool is initialized before populating Andrew Rybchenko
@ 2018-03-25 16:20     ` Andrew Rybchenko
  2018-03-25 16:20     ` [PATCH v2 05/11] mempool: add op to populate objects using provided memory Andrew Rybchenko
                       ` (6 subsequent siblings)
  10 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-25 16:20 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ

The size of the memory chunk required to populate mempool objects
depends on how the objects are stored in memory. Different mempool
drivers may have different requirements, so a new operation allows a
driver to calculate the memory size according to its own requirements
and to advertise its minimum memory chunk size and alignment in a
generic way.

Bump ABI version since the patch breaks it.
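
For illustration only (the "example" driver and its contiguity policy
are hypothetical; only the callback signature and the default helper
come from this patch), a driver that needs the whole area in a single
contiguous chunk could wrap the default calculation:

static ssize_t
example_contig_calc_mem_size(const struct rte_mempool *mp, uint32_t obj_num,
			     uint32_t pg_shift, size_t *min_chunk_size,
			     size_t *align)
{
	ssize_t mem_size;

	/* Let the default callback do the page-aware size math. */
	mem_size = rte_mempool_op_calc_mem_size_default(mp, obj_num, pg_shift,
							min_chunk_size, align);
	if (mem_size < 0)
		return mem_size;

	/* Hypothetical policy: the whole area must be one memory chunk. */
	*min_chunk_size = mem_size;

	return mem_size;
}

A driver that is fine with the default behaviour can leave calc_mem_size
as NULL; rte_mempool_ops_calc_mem_size() then falls back to
rte_mempool_op_calc_mem_size_default().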

Suggested-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
v1 -> v2:
 - clarify min_chunk_size meaning
 - rebase on top of patch series which fixes library version in meson
   build

RFCv2 -> v1:
 - move default calc_mem_size callback to rte_mempool_ops_default.c
 - add ABI changes to release notes
 - name default callback consistently: rte_mempool_op_<callback>_default()
 - bump ABI version since it is the first patch which breaks ABI
 - describe default callback behaviour in details
 - avoid introduction of internal function to cope with deprecation
   (keep it to deprecation patch)
 - move cache-line or page boundary chunk alignment to default callback
 - highlight that min_chunk_size and align parameters are output only

 doc/guides/rel_notes/deprecation.rst         |  3 +-
 doc/guides/rel_notes/release_18_05.rst       |  7 ++-
 lib/librte_mempool/Makefile                  |  3 +-
 lib/librte_mempool/meson.build               |  5 +-
 lib/librte_mempool/rte_mempool.c             | 43 +++++++-------
 lib/librte_mempool/rte_mempool.h             | 86 +++++++++++++++++++++++++++-
 lib/librte_mempool/rte_mempool_ops.c         | 18 ++++++
 lib/librte_mempool/rte_mempool_ops_default.c | 38 ++++++++++++
 lib/librte_mempool/rte_mempool_version.map   |  7 +++
 9 files changed, 182 insertions(+), 28 deletions(-)
 create mode 100644 lib/librte_mempool/rte_mempool_ops_default.c

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 6594585..e02d4ca 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -72,8 +72,7 @@ Deprecation Notices
 
   - removal of ``get_capabilities`` mempool ops and related flags.
   - substitute ``register_memory_area`` with ``populate`` ops.
-  - addition of new ops to customize required memory chunk calculation,
-    customize objects population and allocate contiguous
+  - addition of new ops to customize objects population and allocate contiguous
     block of objects if underlying driver supports it.
 
 * mbuf: The control mbuf API will be removed in v18.05. The impacted
diff --git a/doc/guides/rel_notes/release_18_05.rst b/doc/guides/rel_notes/release_18_05.rst
index f2525bb..59583ea 100644
--- a/doc/guides/rel_notes/release_18_05.rst
+++ b/doc/guides/rel_notes/release_18_05.rst
@@ -80,6 +80,11 @@ ABI Changes
    Also, make sure to start the actual text at the margin.
    =========================================================
 
+* **Changed rte_mempool_ops structure.**
+
+  A new callback ``calc_mem_size`` has been added to ``rte_mempool_ops``
+  to allow to customize required memory size calculation.
+
 
 Removed Items
 -------------
@@ -152,7 +157,7 @@ The libraries prepended with a plus sign were incremented in this version.
      librte_latencystats.so.1
      librte_lpm.so.2
      librte_mbuf.so.3
-     librte_mempool.so.3
+   + librte_mempool.so.4
    + librte_meter.so.2
      librte_metrics.so.1
      librte_net.so.1
diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
index 24e735a..072740f 100644
--- a/lib/librte_mempool/Makefile
+++ b/lib/librte_mempool/Makefile
@@ -11,11 +11,12 @@ LDLIBS += -lrte_eal -lrte_ring
 
 EXPORT_MAP := rte_mempool_version.map
 
-LIBABIVER := 3
+LIBABIVER := 4
 
 # all source are stored in SRCS-y
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool.c
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_ops.c
+SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_ops_default.c
 # install includes
 SYMLINK-$(CONFIG_RTE_LIBRTE_MEMPOOL)-include := rte_mempool.h
 
diff --git a/lib/librte_mempool/meson.build b/lib/librte_mempool/meson.build
index 712720f..9e3b527 100644
--- a/lib/librte_mempool/meson.build
+++ b/lib/librte_mempool/meson.build
@@ -1,7 +1,8 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2017 Intel Corporation
 
-version = 3
-sources = files('rte_mempool.c', 'rte_mempool_ops.c')
+version = 4
+sources = files('rte_mempool.c', 'rte_mempool_ops.c',
+		'rte_mempool_ops_default.c')
 headers = files('rte_mempool.h')
 deps += ['ring']
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index d8e3720..dd2d0fe 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -561,10 +561,10 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 	unsigned int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
 	char mz_name[RTE_MEMZONE_NAMESIZE];
 	const struct rte_memzone *mz;
-	size_t size, total_elt_sz, align, pg_sz, pg_shift;
+	ssize_t mem_size;
+	size_t align, pg_sz, pg_shift;
 	rte_iova_t iova;
 	unsigned mz_id, n;
-	unsigned int mp_flags;
 	int ret;
 
 	ret = mempool_ops_alloc_once(mp);
@@ -575,29 +575,23 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 	if (mp->nb_mem_chunks != 0)
 		return -EEXIST;
 
-	/* Get mempool capabilities */
-	mp_flags = 0;
-	ret = rte_mempool_ops_get_capabilities(mp, &mp_flags);
-	if ((ret < 0) && (ret != -ENOTSUP))
-		return ret;
-
-	/* update mempool capabilities */
-	mp->flags |= mp_flags;
-
 	if (rte_eal_has_hugepages()) {
 		pg_shift = 0; /* not needed, zone is physically contiguous */
 		pg_sz = 0;
-		align = RTE_CACHE_LINE_SIZE;
 	} else {
 		pg_sz = getpagesize();
 		pg_shift = rte_bsf32(pg_sz);
-		align = pg_sz;
 	}
 
-	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
 	for (mz_id = 0, n = mp->size; n > 0; mz_id++, n -= ret) {
-		size = rte_mempool_xmem_size(n, total_elt_sz, pg_shift,
-						mp->flags);
+		size_t min_chunk_size;
+
+		mem_size = rte_mempool_ops_calc_mem_size(mp, n, pg_shift,
+				&min_chunk_size, &align);
+		if (mem_size < 0) {
+			ret = mem_size;
+			goto fail;
+		}
 
 		ret = snprintf(mz_name, sizeof(mz_name),
 			RTE_MEMPOOL_MZ_FORMAT "_%d", mp->name, mz_id);
@@ -606,7 +600,7 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 			goto fail;
 		}
 
-		mz = rte_memzone_reserve_aligned(mz_name, size,
+		mz = rte_memzone_reserve_aligned(mz_name, mem_size,
 			mp->socket_id, mz_flags, align);
 		/* not enough memory, retry with the biggest zone we have */
 		if (mz == NULL)
@@ -617,6 +611,12 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 			goto fail;
 		}
 
+		if (mz->len < min_chunk_size) {
+			rte_memzone_free(mz);
+			ret = -ENOMEM;
+			goto fail;
+		}
+
 		if (mp->flags & MEMPOOL_F_NO_IOVA_CONTIG)
 			iova = RTE_BAD_IOVA;
 		else
@@ -649,13 +649,14 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 static size_t
 get_anon_size(const struct rte_mempool *mp)
 {
-	size_t size, total_elt_sz, pg_sz, pg_shift;
+	size_t size, pg_sz, pg_shift;
+	size_t min_chunk_size;
+	size_t align;
 
 	pg_sz = getpagesize();
 	pg_shift = rte_bsf32(pg_sz);
-	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
-	size = rte_mempool_xmem_size(mp->size, total_elt_sz, pg_shift,
-					mp->flags);
+	size = rte_mempool_ops_calc_mem_size(mp, mp->size, pg_shift,
+					     &min_chunk_size, &align);
 
 	return size;
 }
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index e531a15..191255d 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -400,6 +400,62 @@ typedef int (*rte_mempool_get_capabilities_t)(const struct rte_mempool *mp,
 typedef int (*rte_mempool_ops_register_memory_area_t)
 (const struct rte_mempool *mp, char *vaddr, rte_iova_t iova, size_t len);
 
+/**
+ * Calculate memory size required to store given number of objects.
+ *
+ * If mempool objects are not required to be IOVA-contiguous
+ * (the flag MEMPOOL_F_NO_IOVA_CONTIG is set), min_chunk_size defines
+ * virtually contiguous chunk size. Otherwise, if mempool objects must
+ * be IOVA-contiguous (the flag MEMPOOL_F_NO_IOVA_CONTIG is clear),
+ * min_chunk_size defines IOVA-contiguous chunk size.
+ *
+ * @param[in] mp
+ *   Pointer to the memory pool.
+ * @param[in] obj_num
+ *   Number of objects.
+ * @param[in] pg_shift
+ *   LOG2 of the physical pages size. If set to 0, ignore page boundaries.
+ * @param[out] min_chunk_size
+ *   Location for minimum size of the memory chunk which may be used to
+ *   store memory pool objects.
+ * @param[out] align
+ *   Location for required memory chunk alignment.
+ * @return
+ *   Required memory size aligned at page boundary.
+ */
+typedef ssize_t (*rte_mempool_calc_mem_size_t)(const struct rte_mempool *mp,
+		uint32_t obj_num,  uint32_t pg_shift,
+		size_t *min_chunk_size, size_t *align);
+
+/**
+ * Default way to calculate memory size required to store given number of
+ * objects.
+ *
+ * If page boundaries may be ignored, it is just a product of total
+ * object size including header and trailer and number of objects.
+ * Otherwise, it is a number of pages required to store given number of
+ * objects without crossing page boundary.
+ *
+ * Note that if object size is bigger than page size, then it assumes
+ * that pages are grouped in subsets of physically continuous pages big
+ * enough to store at least one object.
+ *
+ * If mempool driver requires object addresses to be block size aligned
+ * (MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS), space for one extra element is
+ * reserved to be able to meet the requirement.
+ *
+ * Minimum size of memory chunk is either all required space, if
+ * capabilities say that whole memory area must be physically contiguous
+ * (MEMPOOL_F_CAPA_PHYS_CONTIG), or a maximum of the page size and total
+ * element size.
+ *
+ * Required memory chunk alignment is a maximum of page size and cache
+ * line size.
+ */
+ssize_t rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
+		uint32_t obj_num, uint32_t pg_shift,
+		size_t *min_chunk_size, size_t *align);
+
 /** Structure defining mempool operations structure */
 struct rte_mempool_ops {
 	char name[RTE_MEMPOOL_OPS_NAMESIZE]; /**< Name of mempool ops struct. */
@@ -416,6 +472,11 @@ struct rte_mempool_ops {
 	 * Notify new memory area to mempool
 	 */
 	rte_mempool_ops_register_memory_area_t register_memory_area;
+	/**
+	 * Optional callback to calculate memory size required to
+	 * store specified number of objects.
+	 */
+	rte_mempool_calc_mem_size_t calc_mem_size;
 } __rte_cache_aligned;
 
 #define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */
@@ -565,6 +626,29 @@ rte_mempool_ops_register_memory_area(const struct rte_mempool *mp,
 				char *vaddr, rte_iova_t iova, size_t len);
 
 /**
+ * @internal wrapper for mempool_ops calc_mem_size callback.
+ * API to calculate size of memory required to store specified number of
+ * object.
+ *
+ * @param[in] mp
+ *   Pointer to the memory pool.
+ * @param[in] obj_num
+ *   Number of objects.
+ * @param[in] pg_shift
+ *   LOG2 of the physical pages size. If set to 0, ignore page boundaries.
+ * @param[out] min_chunk_size
+ *   Location for minimum size of the memory chunk which may be used to
+ *   store memory pool objects.
+ * @param[out] align
+ *   Location for required memory chunk alignment.
+ * @return
+ *   Required memory size aligned at page boundary.
+ */
+ssize_t rte_mempool_ops_calc_mem_size(const struct rte_mempool *mp,
+				      uint32_t obj_num, uint32_t pg_shift,
+				      size_t *min_chunk_size, size_t *align);
+
+/**
  * @internal wrapper for mempool_ops free callback.
  *
  * @param mp
@@ -1534,7 +1618,7 @@ uint32_t rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
  * of objects. Assume that the memory buffer will be aligned at page
  * boundary.
  *
- * Note that if object size is bigger then page size, then it assumes
+ * Note that if object size is bigger than page size, then it assumes
  * that pages are grouped in subsets of physically continuous pages big
  * enough to store at least one object.
  *
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
index 0732255..26908cc 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -59,6 +59,7 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h)
 	ops->get_count = h->get_count;
 	ops->get_capabilities = h->get_capabilities;
 	ops->register_memory_area = h->register_memory_area;
+	ops->calc_mem_size = h->calc_mem_size;
 
 	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
 
@@ -123,6 +124,23 @@ rte_mempool_ops_register_memory_area(const struct rte_mempool *mp, char *vaddr,
 	return ops->register_memory_area(mp, vaddr, iova, len);
 }
 
+/* wrapper to notify new memory area to external mempool */
+ssize_t
+rte_mempool_ops_calc_mem_size(const struct rte_mempool *mp,
+				uint32_t obj_num, uint32_t pg_shift,
+				size_t *min_chunk_size, size_t *align)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_get_ops(mp->ops_index);
+
+	if (ops->calc_mem_size == NULL)
+		return rte_mempool_op_calc_mem_size_default(mp, obj_num,
+				pg_shift, min_chunk_size, align);
+
+	return ops->calc_mem_size(mp, obj_num, pg_shift, min_chunk_size, align);
+}
+
 /* sets mempool ops previously registered by rte_mempool_register_ops. */
 int
 rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
diff --git a/lib/librte_mempool/rte_mempool_ops_default.c b/lib/librte_mempool/rte_mempool_ops_default.c
new file mode 100644
index 0000000..57fe79b
--- /dev/null
+++ b/lib/librte_mempool/rte_mempool_ops_default.c
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2016 Intel Corporation.
+ * Copyright(c) 2016 6WIND S.A.
+ * Copyright(c) 2018 Solarflare Communications Inc.
+ */
+
+#include <rte_mempool.h>
+
+ssize_t
+rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
+				     uint32_t obj_num, uint32_t pg_shift,
+				     size_t *min_chunk_size, size_t *align)
+{
+	unsigned int mp_flags;
+	int ret;
+	size_t total_elt_sz;
+	size_t mem_size;
+
+	/* Get mempool capabilities */
+	mp_flags = 0;
+	ret = rte_mempool_ops_get_capabilities(mp, &mp_flags);
+	if ((ret < 0) && (ret != -ENOTSUP))
+		return ret;
+
+	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
+
+	mem_size = rte_mempool_xmem_size(obj_num, total_elt_sz, pg_shift,
+					 mp->flags | mp_flags);
+
+	if (mp_flags & MEMPOOL_F_CAPA_PHYS_CONTIG)
+		*min_chunk_size = mem_size;
+	else
+		*min_chunk_size = RTE_MAX((size_t)1 << pg_shift, total_elt_sz);
+
+	*align = RTE_MAX((size_t)RTE_CACHE_LINE_SIZE, (size_t)1 << pg_shift);
+
+	return mem_size;
+}
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index 62b76f9..cb38189 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -51,3 +51,10 @@ DPDK_17.11 {
 	rte_mempool_populate_iova_tab;
 
 } DPDK_16.07;
+
+DPDK_18.05 {
+	global:
+
+	rte_mempool_op_calc_mem_size_default;
+
+} DPDK_17.11;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [PATCH v2 05/11] mempool: add op to populate objects using provided memory
  2018-03-25 16:20   ` [PATCH v2 00/11] " Andrew Rybchenko
                       ` (3 preceding siblings ...)
  2018-03-25 16:20     ` [PATCH v2 04/11] mempool: add op to calculate memory size to be allocated Andrew Rybchenko
@ 2018-03-25 16:20     ` Andrew Rybchenko
  2018-03-25 16:20     ` [PATCH v2 06/11] mempool: remove callback to get capabilities Andrew Rybchenko
                       ` (5 subsequent siblings)
  10 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-25 16:20 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ

The callback allows a driver to customize how objects are laid out in
the memory chunk. A default implementation, which simply places objects
one after another, is available.
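
As a rough sketch (the driver and its alignment policy are hypothetical;
only the populate prototype, the obj_cb contract and the default helper
come from this patch), a driver could adjust the chunk start and then
delegate the per-object layout to the default implementation:

static int
example_aligned_populate(struct rte_mempool *mp, unsigned int max_objs,
			 void *vaddr, rte_iova_t iova, size_t len,
			 rte_mempool_populate_obj_cb_t *obj_cb,
			 void *obj_cb_arg)
{
	size_t total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
	uintptr_t rem = (uintptr_t)vaddr % total_elt_sz;
	/* Hypothetical policy: first object starts on a total_elt_sz multiple. */
	size_t off = (rem != 0) ? total_elt_sz - rem : 0;

	if (off > len)
		return -EINVAL;

	return rte_mempool_op_populate_default(mp, max_objs,
			(char *)vaddr + off,
			(iova == RTE_BAD_IOVA) ? RTE_BAD_IOVA : iova + off,
			len - off, obj_cb, obj_cb_arg);
}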

Suggested-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
v1 -> v2:
 - fix memory leak if off is bigger than len

RFCv2 -> v1:
 - advertise ABI changes in release notes
 - use consistent name for default callback:
   rte_mempool_op_<callback>_default()
 - add opaque data pointer to populated object callback
 - move default callback to dedicated file

 doc/guides/rel_notes/deprecation.rst         |  2 +-
 doc/guides/rel_notes/release_18_05.rst       |  2 +
 lib/librte_mempool/rte_mempool.c             | 23 ++++---
 lib/librte_mempool/rte_mempool.h             | 90 ++++++++++++++++++++++++++++
 lib/librte_mempool/rte_mempool_ops.c         | 21 +++++++
 lib/librte_mempool/rte_mempool_ops_default.c | 24 ++++++++
 lib/librte_mempool/rte_mempool_version.map   |  1 +
 7 files changed, 149 insertions(+), 14 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index e02d4ca..c06fc67 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -72,7 +72,7 @@ Deprecation Notices
 
   - removal of ``get_capabilities`` mempool ops and related flags.
   - substitute ``register_memory_area`` with ``populate`` ops.
-  - addition of new ops to customize objects population and allocate contiguous
+  - addition of new op to allocate contiguous
     block of objects if underlying driver supports it.
 
 * mbuf: The control mbuf API will be removed in v18.05. The impacted
diff --git a/doc/guides/rel_notes/release_18_05.rst b/doc/guides/rel_notes/release_18_05.rst
index 59583ea..abaefe5 100644
--- a/doc/guides/rel_notes/release_18_05.rst
+++ b/doc/guides/rel_notes/release_18_05.rst
@@ -84,6 +84,8 @@ ABI Changes
 
   A new callback ``calc_mem_size`` has been added to ``rte_mempool_ops``
   to allow to customize required memory size calculation.
+  A new callback ``populate`` has been added to ``rte_mempool_ops``
+  to allow to customize objects population.
 
 
 Removed Items
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index dd2d0fe..d917dc7 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -99,7 +99,8 @@ static unsigned optimize_object_size(unsigned obj_size)
 }
 
 static void
-mempool_add_elem(struct rte_mempool *mp, void *obj, rte_iova_t iova)
+mempool_add_elem(struct rte_mempool *mp, __rte_unused void *opaque,
+		 void *obj, rte_iova_t iova)
 {
 	struct rte_mempool_objhdr *hdr;
 	struct rte_mempool_objtlr *tlr __rte_unused;
@@ -116,9 +117,6 @@ mempool_add_elem(struct rte_mempool *mp, void *obj, rte_iova_t iova)
 	tlr = __mempool_get_trailer(obj);
 	tlr->cookie = RTE_MEMPOOL_TRAILER_COOKIE;
 #endif
-
-	/* enqueue in ring */
-	rte_mempool_ops_enqueue_bulk(mp, &obj, 1);
 }
 
 /* call obj_cb() for each mempool element */
@@ -407,17 +405,16 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	else
 		off = RTE_PTR_ALIGN_CEIL(vaddr, RTE_CACHE_LINE_SIZE) - vaddr;
 
-	while (off + total_elt_sz <= len && mp->populated_size < mp->size) {
-		off += mp->header_size;
-		if (iova == RTE_BAD_IOVA)
-			mempool_add_elem(mp, (char *)vaddr + off,
-				RTE_BAD_IOVA);
-		else
-			mempool_add_elem(mp, (char *)vaddr + off, iova + off);
-		off += mp->elt_size + mp->trailer_size;
-		i++;
+	if (off > len) {
+		ret = -EINVAL;
+		goto fail;
 	}
 
+	i = rte_mempool_ops_populate(mp, mp->size - mp->populated_size,
+		(char *)vaddr + off,
+		(iova == RTE_BAD_IOVA) ? RTE_BAD_IOVA : (iova + off),
+		len - off, mempool_add_elem, NULL);
+
 	/* not enough room to store one object */
 	if (i == 0) {
 		ret = -EINVAL;
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 191255d..754261e 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -456,6 +456,63 @@ ssize_t rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
 		uint32_t obj_num, uint32_t pg_shift,
 		size_t *min_chunk_size, size_t *align);
 
+/**
+ * Function to be called for each populated object.
+ *
+ * @param[in] mp
+ *   A pointer to the mempool structure.
+ * @param[in] opaque
+ *   An opaque pointer passed to iterator.
+ * @param[in] vaddr
+ *   Object virtual address.
+ * @param[in] iova
+ *   Input/output virtual address of the object or RTE_BAD_IOVA.
+ */
+typedef void (rte_mempool_populate_obj_cb_t)(struct rte_mempool *mp,
+		void *opaque, void *vaddr, rte_iova_t iova);
+
+/**
+ * Populate memory pool objects using provided memory chunk.
+ *
+ * Populated objects should be enqueued to the pool, e.g. using
+ * rte_mempool_ops_enqueue_bulk().
+ *
+ * If the given IO address is unknown (iova = RTE_BAD_IOVA),
+ * the chunk doesn't need to be physically contiguous (only virtually),
+ * and allocated objects may span two pages.
+ *
+ * @param[in] mp
+ *   A pointer to the mempool structure.
+ * @param[in] max_objs
+ *   Maximum number of objects to be populated.
+ * @param[in] vaddr
+ *   The virtual address of memory that should be used to store objects.
+ * @param[in] iova
+ *   The IO address
+ * @param[in] len
+ *   The length of memory in bytes.
+ * @param[in] obj_cb
+ *   Callback function to be executed for each populated object.
+ * @param[in] obj_cb_arg
+ *   An opaque pointer passed to the callback function.
+ * @return
+ *   The number of objects added on success.
+ *   On error, no objects are populated and a negative errno is returned.
+ */
+typedef int (*rte_mempool_populate_t)(struct rte_mempool *mp,
+		unsigned int max_objs,
+		void *vaddr, rte_iova_t iova, size_t len,
+		rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg);
+
+/**
+ * Default way to populate memory pool object using provided memory
+ * chunk: just slice objects one by one.
+ */
+int rte_mempool_op_populate_default(struct rte_mempool *mp,
+		unsigned int max_objs,
+		void *vaddr, rte_iova_t iova, size_t len,
+		rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg);
+
 /** Structure defining mempool operations structure */
 struct rte_mempool_ops {
 	char name[RTE_MEMPOOL_OPS_NAMESIZE]; /**< Name of mempool ops struct. */
@@ -477,6 +534,11 @@ struct rte_mempool_ops {
 	 * store specified number of objects.
 	 */
 	rte_mempool_calc_mem_size_t calc_mem_size;
+	/**
+	 * Optional callback to populate mempool objects using
+	 * provided memory chunk.
+	 */
+	rte_mempool_populate_t populate;
 } __rte_cache_aligned;
 
 #define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */
@@ -649,6 +711,34 @@ ssize_t rte_mempool_ops_calc_mem_size(const struct rte_mempool *mp,
 				      size_t *min_chunk_size, size_t *align);
 
 /**
+ * @internal wrapper for mempool_ops populate callback.
+ *
+ * Populate memory pool objects using provided memory chunk.
+ *
+ * @param[in] mp
+ *   A pointer to the mempool structure.
+ * @param[in] max_objs
+ *   Maximum number of objects to be populated.
+ * @param[in] vaddr
+ *   The virtual address of memory that should be used to store objects.
+ * @param[in] iova
+ *   The IO address
+ * @param[in] len
+ *   The length of memory in bytes.
+ * @param[in] obj_cb
+ *   Callback function to be executed for each populated object.
+ * @param[in] obj_cb_arg
+ *   An opaque pointer passed to the callback function.
+ * @return
+ *   The number of objects added on success.
+ *   On error, no objects are populated and a negative errno is returned.
+ */
+int rte_mempool_ops_populate(struct rte_mempool *mp, unsigned int max_objs,
+			     void *vaddr, rte_iova_t iova, size_t len,
+			     rte_mempool_populate_obj_cb_t *obj_cb,
+			     void *obj_cb_arg);
+
+/**
  * @internal wrapper for mempool_ops free callback.
  *
  * @param mp
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
index 26908cc..1a7f39f 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -60,6 +60,7 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h)
 	ops->get_capabilities = h->get_capabilities;
 	ops->register_memory_area = h->register_memory_area;
 	ops->calc_mem_size = h->calc_mem_size;
+	ops->populate = h->populate;
 
 	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
 
@@ -141,6 +142,26 @@ rte_mempool_ops_calc_mem_size(const struct rte_mempool *mp,
 	return ops->calc_mem_size(mp, obj_num, pg_shift, min_chunk_size, align);
 }
 
+/* wrapper to populate memory pool objects using provided memory chunk */
+int
+rte_mempool_ops_populate(struct rte_mempool *mp, unsigned int max_objs,
+				void *vaddr, rte_iova_t iova, size_t len,
+				rte_mempool_populate_obj_cb_t *obj_cb,
+				void *obj_cb_arg)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_get_ops(mp->ops_index);
+
+	if (ops->populate == NULL)
+		return rte_mempool_op_populate_default(mp, max_objs, vaddr,
+						       iova, len, obj_cb,
+						       obj_cb_arg);
+
+	return ops->populate(mp, max_objs, vaddr, iova, len, obj_cb,
+			     obj_cb_arg);
+}
+
 /* sets mempool ops previously registered by rte_mempool_register_ops. */
 int
 rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
diff --git a/lib/librte_mempool/rte_mempool_ops_default.c b/lib/librte_mempool/rte_mempool_ops_default.c
index 57fe79b..57295f7 100644
--- a/lib/librte_mempool/rte_mempool_ops_default.c
+++ b/lib/librte_mempool/rte_mempool_ops_default.c
@@ -36,3 +36,27 @@ rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
 
 	return mem_size;
 }
+
+int
+rte_mempool_op_populate_default(struct rte_mempool *mp, unsigned int max_objs,
+		void *vaddr, rte_iova_t iova, size_t len,
+		rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
+{
+	size_t total_elt_sz;
+	size_t off;
+	unsigned int i;
+	void *obj;
+
+	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
+
+	for (off = 0, i = 0; off + total_elt_sz <= len && i < max_objs; i++) {
+		off += mp->header_size;
+		obj = (char *)vaddr + off;
+		obj_cb(mp, obj_cb_arg, obj,
+		       (iova == RTE_BAD_IOVA) ? RTE_BAD_IOVA : (iova + off));
+		rte_mempool_ops_enqueue_bulk(mp, &obj, 1);
+		off += mp->elt_size + mp->trailer_size;
+	}
+
+	return i;
+}
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index cb38189..41a0b09 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -56,5 +56,6 @@ DPDK_18.05 {
 	global:
 
 	rte_mempool_op_calc_mem_size_default;
+	rte_mempool_op_populate_default;
 
 } DPDK_17.11;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [PATCH v2 06/11] mempool: remove callback to get capabilities
  2018-03-25 16:20   ` [PATCH v2 00/11] " Andrew Rybchenko
                       ` (4 preceding siblings ...)
  2018-03-25 16:20     ` [PATCH v2 05/11] mempool: add op to populate objects using provided memory Andrew Rybchenko
@ 2018-03-25 16:20     ` Andrew Rybchenko
  2018-03-25 16:20     ` [PATCH v2 07/11] mempool: deprecate xmem functions Andrew Rybchenko
                       ` (4 subsequent siblings)
  10 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-25 16:20 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ, Santosh Shukla, Jerin Jacob

The callback was introduced to let generic code know that the octeontx
mempool driver requires a single physically contiguous memory chunk to
store all objects, with each object address aligned to the total object
size. These requirements are now met using the new callbacks to
calculate the required memory chunk size and to populate objects using
the provided memory chunk.

These capability flags are not used anywhere else.

Restricting capabilities to flags is not generic and likely to be
insufficient to describe mempool driver features. If required in the
future, an API which returns structured information may be added.
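
A quick worked example (the numbers are made up) of why the octeontx
calc_mem_size callback below asks the default size calculation for
obj_num + 1 objects when object addresses must be aligned to the total
element size:

static size_t
example_align_skip(void)
{
	const size_t total_elt_sz = 2048;       /* header + element + trailer   */
	const uintptr_t chunk_vaddr = 0x100200; /* 512 B past a 2048 B multiple */

	/* Bytes skipped before the first aligned object: 2048 - 512 = 1536. */
	return total_elt_sz - (chunk_vaddr % total_elt_sz);
}

Up to total_elt_sz - 1 bytes may be lost at the start of the chunk this
way, so sizing the area for one extra element guarantees that obj_num
aligned objects still fit.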

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
v1 -> v2:
 - fix typo
 - rebase on top of patch which renames MEMPOOL_F_NO_PHYS_CONTIG

RFCv2 -> v1:
 - squash mempool/octeontx patches to add calc_mem_size and populate
   callbacks to this one in order to avoid breakages in the middle of
   patchset
 - advertise API changes in release notes

 doc/guides/rel_notes/deprecation.rst            |  1 -
 doc/guides/rel_notes/release_18_05.rst          | 11 +++++
 drivers/mempool/octeontx/rte_mempool_octeontx.c | 59 +++++++++++++++++++++----
 lib/librte_mempool/rte_mempool.c                | 44 ++----------------
 lib/librte_mempool/rte_mempool.h                | 52 +---------------------
 lib/librte_mempool/rte_mempool_ops.c            | 14 ------
 lib/librte_mempool/rte_mempool_ops_default.c    | 15 +------
 lib/librte_mempool/rte_mempool_version.map      |  1 -
 8 files changed, 68 insertions(+), 129 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index c06fc67..4deed9a 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -70,7 +70,6 @@ Deprecation Notices
 
   The following changes are planned:
 
-  - removal of ``get_capabilities`` mempool ops and related flags.
   - substitute ``register_memory_area`` with ``populate`` ops.
   - addition of new op to allocate contiguous
     block of objects if underlying driver supports it.
diff --git a/doc/guides/rel_notes/release_18_05.rst b/doc/guides/rel_notes/release_18_05.rst
index abaefe5..c50f26c 100644
--- a/doc/guides/rel_notes/release_18_05.rst
+++ b/doc/guides/rel_notes/release_18_05.rst
@@ -66,6 +66,14 @@ API Changes
    Also, make sure to start the actual text at the margin.
    =========================================================
 
+* **Removed mempool capability flags and related functions.**
+
+  Flags ``MEMPOOL_F_CAPA_PHYS_CONTIG`` and
+  ``MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS`` were used by octeontx mempool
+  driver to customize generic mempool library behaviour.
+  Now the new driver callbacks ``calc_mem_size`` and ``populate`` may be
+  used to achieve it without specific knowledge in the generic code.
+
 
 ABI Changes
 -----------
@@ -86,6 +94,9 @@ ABI Changes
   to allow to customize required memory size calculation.
   A new callback ``populate`` has been added to ``rte_mempool_ops``
   to allow to customize objects population.
+  Callback ``get_capabilities`` has been removed from ``rte_mempool_ops``
+  since its features are covered by ``calc_mem_size`` and ``populate``
+  callbacks.
 
 
 Removed Items
diff --git a/drivers/mempool/octeontx/rte_mempool_octeontx.c b/drivers/mempool/octeontx/rte_mempool_octeontx.c
index d143d05..64ed528 100644
--- a/drivers/mempool/octeontx/rte_mempool_octeontx.c
+++ b/drivers/mempool/octeontx/rte_mempool_octeontx.c
@@ -126,14 +126,29 @@ octeontx_fpavf_get_count(const struct rte_mempool *mp)
 	return octeontx_fpa_bufpool_free_count(pool);
 }
 
-static int
-octeontx_fpavf_get_capabilities(const struct rte_mempool *mp,
-				unsigned int *flags)
+static ssize_t
+octeontx_fpavf_calc_mem_size(const struct rte_mempool *mp,
+			     uint32_t obj_num, uint32_t pg_shift,
+			     size_t *min_chunk_size, size_t *align)
 {
-	RTE_SET_USED(mp);
-	*flags |= (MEMPOOL_F_CAPA_PHYS_CONTIG |
-			MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS);
-	return 0;
+	ssize_t mem_size;
+
+	/*
+	 * Simply need space for one more object to be able to
+	 * fulfil alignment requirements.
+	 */
+	mem_size = rte_mempool_op_calc_mem_size_default(mp, obj_num + 1,
+							pg_shift,
+							min_chunk_size, align);
+	if (mem_size >= 0) {
+		/*
+		 * Memory area which contains objects must be physically
+		 * contiguous.
+		 */
+		*min_chunk_size = mem_size;
+	}
+
+	return mem_size;
 }
 
 static int
@@ -150,6 +165,33 @@ octeontx_fpavf_register_memory_area(const struct rte_mempool *mp,
 	return octeontx_fpavf_pool_set_range(pool_bar, len, vaddr, gpool);
 }
 
+static int
+octeontx_fpavf_populate(struct rte_mempool *mp, unsigned int max_objs,
+			void *vaddr, rte_iova_t iova, size_t len,
+			rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
+{
+	size_t total_elt_sz;
+	size_t off;
+
+	if (iova == RTE_BAD_IOVA)
+		return -EINVAL;
+
+	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
+
+	/* align object start address to a multiple of total_elt_sz */
+	off = total_elt_sz - ((uintptr_t)vaddr % total_elt_sz);
+
+	if (len < off)
+		return -EINVAL;
+
+	vaddr = (char *)vaddr + off;
+	iova += off;
+	len -= off;
+
+	return rte_mempool_op_populate_default(mp, max_objs, vaddr, iova, len,
+					       obj_cb, obj_cb_arg);
+}
+
 static struct rte_mempool_ops octeontx_fpavf_ops = {
 	.name = "octeontx_fpavf",
 	.alloc = octeontx_fpavf_alloc,
@@ -157,8 +199,9 @@ static struct rte_mempool_ops octeontx_fpavf_ops = {
 	.enqueue = octeontx_fpavf_enqueue,
 	.dequeue = octeontx_fpavf_dequeue,
 	.get_count = octeontx_fpavf_get_count,
-	.get_capabilities = octeontx_fpavf_get_capabilities,
 	.register_memory_area = octeontx_fpavf_register_memory_area,
+	.calc_mem_size = octeontx_fpavf_calc_mem_size,
+	.populate = octeontx_fpavf_populate,
 };
 
 MEMPOOL_REGISTER_OPS(octeontx_fpavf_ops);
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index d917dc7..40eedde 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -208,15 +208,9 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
  */
 size_t
 rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
-		      unsigned int flags)
+		      __rte_unused unsigned int flags)
 {
 	size_t obj_per_page, pg_num, pg_sz;
-	unsigned int mask;
-
-	mask = MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS | MEMPOOL_F_CAPA_PHYS_CONTIG;
-	if ((flags & mask) == mask)
-		/* alignment need one additional object */
-		elt_num += 1;
 
 	if (total_elt_sz == 0)
 		return 0;
@@ -240,18 +234,12 @@ rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
 ssize_t
 rte_mempool_xmem_usage(__rte_unused void *vaddr, uint32_t elt_num,
 	size_t total_elt_sz, const rte_iova_t iova[], uint32_t pg_num,
-	uint32_t pg_shift, unsigned int flags)
+	uint32_t pg_shift, __rte_unused unsigned int flags)
 {
 	uint32_t elt_cnt = 0;
 	rte_iova_t start, end;
 	uint32_t iova_idx;
 	size_t pg_sz = (size_t)1 << pg_shift;
-	unsigned int mask;
-
-	mask = MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS | MEMPOOL_F_CAPA_PHYS_CONTIG;
-	if ((flags & mask) == mask)
-		/* alignment need one additional object */
-		elt_num += 1;
 
 	/* if iova is NULL, assume contiguous memory */
 	if (iova == NULL) {
@@ -345,8 +333,6 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
 	void *opaque)
 {
-	unsigned total_elt_sz;
-	unsigned int mp_capa_flags;
 	unsigned i = 0;
 	size_t off;
 	struct rte_mempool_memhdr *memhdr;
@@ -365,27 +351,6 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	if (mp->populated_size >= mp->size)
 		return -ENOSPC;
 
-	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
-
-	/* Get mempool capabilities */
-	mp_capa_flags = 0;
-	ret = rte_mempool_ops_get_capabilities(mp, &mp_capa_flags);
-	if ((ret < 0) && (ret != -ENOTSUP))
-		return ret;
-
-	/* update mempool capabilities */
-	mp->flags |= mp_capa_flags;
-
-	/* Detect pool area has sufficient space for elements */
-	if (mp_capa_flags & MEMPOOL_F_CAPA_PHYS_CONTIG) {
-		if (len < total_elt_sz * mp->size) {
-			RTE_LOG(ERR, MEMPOOL,
-				"pool area %" PRIx64 " not enough\n",
-				(uint64_t)len);
-			return -ENOSPC;
-		}
-	}
-
 	memhdr = rte_zmalloc("MEMPOOL_MEMHDR", sizeof(*memhdr), 0);
 	if (memhdr == NULL)
 		return -ENOMEM;
@@ -397,10 +362,7 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	memhdr->free_cb = free_cb;
 	memhdr->opaque = opaque;
 
-	if (mp_capa_flags & MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS)
-		/* align object start address to a multiple of total_elt_sz */
-		off = total_elt_sz - ((uintptr_t)vaddr % total_elt_sz);
-	else if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
+	if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
 		off = RTE_PTR_ALIGN_CEIL(vaddr, 8) - vaddr;
 	else
 		off = RTE_PTR_ALIGN_CEIL(vaddr, RTE_CACHE_LINE_SIZE) - vaddr;
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 754261e..0b83d5e 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -246,24 +246,6 @@ struct rte_mempool {
 #define MEMPOOL_F_POOL_CREATED   0x0010 /**< Internal: pool is created. */
 #define MEMPOOL_F_NO_IOVA_CONTIG 0x0020 /**< Don't need IOVA contiguous objs. */
 #define MEMPOOL_F_NO_PHYS_CONTIG MEMPOOL_F_NO_IOVA_CONTIG /* deprecated */
-/**
- * This capability flag is advertised by a mempool handler, if the whole
- * memory area containing the objects must be physically contiguous.
- * Note: This flag should not be passed by application.
- */
-#define MEMPOOL_F_CAPA_PHYS_CONTIG 0x0040
-/**
- * This capability flag is advertised by a mempool handler. Used for a case
- * where mempool driver wants object start address(vaddr) aligned to block
- * size(/ total element size).
- *
- * Note:
- * - This flag should not be passed by application.
- *   Flag used for mempool driver only.
- * - Mempool driver must also set MEMPOOL_F_CAPA_PHYS_CONTIG flag along with
- *   MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS.
- */
-#define MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS 0x0080
 
 /**
  * @internal When debug is enabled, store some statistics.
@@ -389,12 +371,6 @@ typedef int (*rte_mempool_dequeue_t)(struct rte_mempool *mp,
 typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
 
 /**
- * Get the mempool capabilities.
- */
-typedef int (*rte_mempool_get_capabilities_t)(const struct rte_mempool *mp,
-		unsigned int *flags);
-
-/**
  * Notify new memory area to mempool.
  */
 typedef int (*rte_mempool_ops_register_memory_area_t)
@@ -440,13 +416,7 @@ typedef ssize_t (*rte_mempool_calc_mem_size_t)(const struct rte_mempool *mp,
  * that pages are grouped in subsets of physically continuous pages big
  * enough to store at least one object.
  *
- * If mempool driver requires object addresses to be block size aligned
- * (MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS), space for one extra element is
- * reserved to be able to meet the requirement.
- *
- * Minimum size of memory chunk is either all required space, if
- * capabilities say that whole memory area must be physically contiguous
- * (MEMPOOL_F_CAPA_PHYS_CONTIG), or a maximum of the page size and total
+ * Minimum size of memory chunk is a maximum of the page size and total
  * element size.
  *
  * Required memory chunk alignment is a maximum of page size and cache
@@ -522,10 +492,6 @@ struct rte_mempool_ops {
 	rte_mempool_dequeue_t dequeue;   /**< Dequeue an object. */
 	rte_mempool_get_count get_count; /**< Get qty of available objs. */
 	/**
-	 * Get the mempool capabilities
-	 */
-	rte_mempool_get_capabilities_t get_capabilities;
-	/**
 	 * Notify new memory area to mempool
 	 */
 	rte_mempool_ops_register_memory_area_t register_memory_area;
@@ -651,22 +617,6 @@ unsigned
 rte_mempool_ops_get_count(const struct rte_mempool *mp);
 
 /**
- * @internal wrapper for mempool_ops get_capabilities callback.
- *
- * @param mp [in]
- *   Pointer to the memory pool.
- * @param flags [out]
- *   Pointer to the mempool flags.
- * @return
- *   - 0: Success; The mempool driver has advertised his pool capabilities in
- *   flags param.
- *   - -ENOTSUP - doesn't support get_capabilities ops (valid case).
- *   - Otherwise, pool create fails.
- */
-int
-rte_mempool_ops_get_capabilities(const struct rte_mempool *mp,
-					unsigned int *flags);
-/**
  * @internal wrapper for mempool_ops register_memory_area callback.
  * API to notify the mempool handler when a new memory area is added to pool.
  *
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
index 1a7f39f..6ac669a 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -57,7 +57,6 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h)
 	ops->enqueue = h->enqueue;
 	ops->dequeue = h->dequeue;
 	ops->get_count = h->get_count;
-	ops->get_capabilities = h->get_capabilities;
 	ops->register_memory_area = h->register_memory_area;
 	ops->calc_mem_size = h->calc_mem_size;
 	ops->populate = h->populate;
@@ -99,19 +98,6 @@ rte_mempool_ops_get_count(const struct rte_mempool *mp)
 	return ops->get_count(mp);
 }
 
-/* wrapper to get external mempool capabilities. */
-int
-rte_mempool_ops_get_capabilities(const struct rte_mempool *mp,
-					unsigned int *flags)
-{
-	struct rte_mempool_ops *ops;
-
-	ops = rte_mempool_get_ops(mp->ops_index);
-
-	RTE_FUNC_PTR_OR_ERR_RET(ops->get_capabilities, -ENOTSUP);
-	return ops->get_capabilities(mp, flags);
-}
-
 /* wrapper to notify new memory area to external mempool */
 int
 rte_mempool_ops_register_memory_area(const struct rte_mempool *mp, char *vaddr,
diff --git a/lib/librte_mempool/rte_mempool_ops_default.c b/lib/librte_mempool/rte_mempool_ops_default.c
index 57295f7..3defc15 100644
--- a/lib/librte_mempool/rte_mempool_ops_default.c
+++ b/lib/librte_mempool/rte_mempool_ops_default.c
@@ -11,26 +11,15 @@ rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
 				     uint32_t obj_num, uint32_t pg_shift,
 				     size_t *min_chunk_size, size_t *align)
 {
-	unsigned int mp_flags;
-	int ret;
 	size_t total_elt_sz;
 	size_t mem_size;
 
-	/* Get mempool capabilities */
-	mp_flags = 0;
-	ret = rte_mempool_ops_get_capabilities(mp, &mp_flags);
-	if ((ret < 0) && (ret != -ENOTSUP))
-		return ret;
-
 	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
 
 	mem_size = rte_mempool_xmem_size(obj_num, total_elt_sz, pg_shift,
-					 mp->flags | mp_flags);
+					 mp->flags);
 
-	if (mp_flags & MEMPOOL_F_CAPA_PHYS_CONTIG)
-		*min_chunk_size = mem_size;
-	else
-		*min_chunk_size = RTE_MAX((size_t)1 << pg_shift, total_elt_sz);
+	*min_chunk_size = RTE_MAX((size_t)1 << pg_shift, total_elt_sz);
 
 	*align = RTE_MAX((size_t)RTE_CACHE_LINE_SIZE, (size_t)1 << pg_shift);
 
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index 41a0b09..637f73f 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -45,7 +45,6 @@ DPDK_16.07 {
 DPDK_17.11 {
 	global:
 
-	rte_mempool_ops_get_capabilities;
 	rte_mempool_ops_register_memory_area;
 	rte_mempool_populate_iova;
 	rte_mempool_populate_iova_tab;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [PATCH v2 07/11] mempool: deprecate xmem functions
  2018-03-25 16:20   ` [PATCH v2 00/11] " Andrew Rybchenko
                       ` (5 preceding siblings ...)
  2018-03-25 16:20     ` [PATCH v2 06/11] mempool: remove callback to get capabilities Andrew Rybchenko
@ 2018-03-25 16:20     ` Andrew Rybchenko
  2018-03-25 16:20     ` [PATCH v2 08/11] mempool/octeontx: prepare to remove register memory area op Andrew Rybchenko
                       ` (3 subsequent siblings)
  10 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-25 16:20 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ, Thomas Monjalon

Move the rte_mempool_xmem_size() code to an internal helper function
since it is required in two places: the deprecated
rte_mempool_xmem_size() and the non-deprecated
rte_mempool_op_calc_mem_size_default().
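
In rough terms, the helper performs the following math (a sketch for the
common case where pg_shift is non-zero and an object fits within a page;
the grouped-pages corner case described in the documentation is omitted):

static size_t
example_mem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift)
{
	size_t pg_sz = (size_t)1 << pg_shift;
	size_t obj_per_page = pg_sz / total_elt_sz;  /* objects per page */
	size_t pg_num = (elt_num + obj_per_page - 1) / obj_per_page;

	return pg_num << pg_shift;  /* bytes, rounded up to page boundary */
}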

Suggested-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
v1 -> v2:
 - deprecate rte_mempool_populate_iova_tab()
 - add -Wno-deprecated-declarations to fix build errors because of
   rte_mempool_populate_iova_tab() deprecation
 - add @deprecated to deprecated functions description

RFCv2 -> v1:
 - advertise deprecation in release notes
 - factor out default memory size calculation into non-deprecated
   internal function to avoid usage of deprecated function internally
 - remove test for deprecated functions to address build issue because
   of usage of deprecated functions (it is easy to allow usage of
   deprecated function in Makefile, but very complicated in meson)

 doc/guides/rel_notes/deprecation.rst         |  7 -------
 doc/guides/rel_notes/release_18_05.rst       | 11 ++++++++++
 lib/librte_mempool/Makefile                  |  3 +++
 lib/librte_mempool/meson.build               | 12 +++++++++++
 lib/librte_mempool/rte_mempool.c             | 19 ++++++++++++++---
 lib/librte_mempool/rte_mempool.h             | 30 +++++++++++++++++++++++++++
 lib/librte_mempool/rte_mempool_ops_default.c |  4 ++--
 test/test/test_mempool.c                     | 31 ----------------------------
 8 files changed, 74 insertions(+), 43 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 4deed9a..473330d 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -60,13 +60,6 @@ Deprecation Notices
   - ``rte_eal_mbuf_default_mempool_ops``
 
 * mempool: several API and ABI changes are planned in v18.05.
-  The following functions, introduced for Xen, which is not supported
-  anymore since v17.11, are hard to use, not used anywhere else in DPDK.
-  Therefore they will be deprecated in v18.05 and removed in v18.08:
-
-  - ``rte_mempool_xmem_create``
-  - ``rte_mempool_xmem_size``
-  - ``rte_mempool_xmem_usage``
 
   The following changes are planned:
 
diff --git a/doc/guides/rel_notes/release_18_05.rst b/doc/guides/rel_notes/release_18_05.rst
index c50f26c..6a8db54 100644
--- a/doc/guides/rel_notes/release_18_05.rst
+++ b/doc/guides/rel_notes/release_18_05.rst
@@ -74,6 +74,17 @@ API Changes
   Now the new driver callbacks ``calc_mem_size`` and ``populate`` may be
   used to achieve it without specific knowledge in the generic code.
 
+* **Deprecated mempool xmem functions.**
+
+  The following functions, introduced for Xen, which is not supported
+  anymore since v17.11, are hard to use, not used anywhere else in DPDK.
+  Therefore they were deprecated in v18.05 and will be removed in v18.08:
+
+  - ``rte_mempool_xmem_create``
+  - ``rte_mempool_xmem_size``
+  - ``rte_mempool_xmem_usage``
+  - ``rte_mempool_populate_iova_tab``
+
 
 ABI Changes
 -----------
diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
index 072740f..2c46fdd 100644
--- a/lib/librte_mempool/Makefile
+++ b/lib/librte_mempool/Makefile
@@ -7,6 +7,9 @@ include $(RTE_SDK)/mk/rte.vars.mk
 LIB = librte_mempool.a
 
 CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
+# Allow deprecated symbol to use deprecated rte_mempool_populate_iova_tab()
+# from earlier deprecated rte_mempool_populate_phys_tab()
+CFLAGS += -Wno-deprecated-declarations
 LDLIBS += -lrte_eal -lrte_ring
 
 EXPORT_MAP := rte_mempool_version.map
diff --git a/lib/librte_mempool/meson.build b/lib/librte_mempool/meson.build
index 9e3b527..22e912a 100644
--- a/lib/librte_mempool/meson.build
+++ b/lib/librte_mempool/meson.build
@@ -1,6 +1,18 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2017 Intel Corporation
 
+extra_flags = []
+
+# Allow deprecated symbol to use deprecated rte_mempool_populate_iova_tab()
+# from earlier deprecated rte_mempool_populate_phys_tab()
+extra_flags += '-Wno-deprecated-declarations'
+
+foreach flag: extra_flags
+	if cc.has_argument(flag)
+		cflags += flag
+	endif
+endforeach
+
 version = 4
 sources = files('rte_mempool.c', 'rte_mempool_ops.c',
 		'rte_mempool_ops_default.c')
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 40eedde..8c3b0b1 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -204,11 +204,13 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
 
 
 /*
- * Calculate maximum amount of memory required to store given number of objects.
+ * Internal function to calculate required memory chunk size shared
+ * by default implementation of the corresponding callback and
+ * deprecated external function.
  */
 size_t
-rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
-		      __rte_unused unsigned int flags)
+rte_mempool_calc_mem_size_helper(uint32_t elt_num, size_t total_elt_sz,
+				 uint32_t pg_shift)
 {
 	size_t obj_per_page, pg_num, pg_sz;
 
@@ -228,6 +230,17 @@ rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
 }
 
 /*
+ * Calculate maximum amount of memory required to store given number of objects.
+ */
+size_t
+rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
+		      __rte_unused unsigned int flags)
+{
+	return rte_mempool_calc_mem_size_helper(elt_num, total_elt_sz,
+						pg_shift);
+}
+
+/*
  * Calculate how much memory would be actually required with the
  * given memory footprint to store required number of elements.
  */
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 0b83d5e..9107f5a 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -427,6 +427,28 @@ ssize_t rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
 		size_t *min_chunk_size, size_t *align);
 
 /**
+ * @internal Helper function to calculate memory size required to store
+ * specified number of objects in assumption that the memory buffer will
+ * be aligned at page boundary.
+ *
+ * Note that if object size is bigger than page size, then it assumes
+ * that pages are grouped in subsets of physically continuous pages big
+ * enough to store at least one object.
+ *
+ * @param elt_num
+ *   Number of elements.
+ * @param total_elt_sz
+ *   The size of each element, including header and trailer, as returned
+ *   by rte_mempool_calc_obj_size().
+ * @param pg_shift
+ *   LOG2 of the physical pages size. If set to 0, ignore page boundaries.
+ * @return
+ *   Required memory size aligned at page boundary.
+ */
+size_t rte_mempool_calc_mem_size_helper(uint32_t elt_num, size_t total_elt_sz,
+		uint32_t pg_shift);
+
+/**
  * Function to be called for each populated object.
  *
  * @param[in] mp
@@ -855,6 +877,7 @@ rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
 		   int socket_id, unsigned flags);
 
 /**
+ * @deprecated
  * Create a new mempool named *name* in memory.
  *
  * The pool contains n elements of elt_size. Its size is set to n.
@@ -912,6 +935,7 @@ rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
  *   The pointer to the new allocated mempool, on success. NULL on error
  *   with rte_errno set appropriately. See rte_mempool_create() for details.
  */
+__rte_deprecated
 struct rte_mempool *
 rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
 		unsigned cache_size, unsigned private_data_size,
@@ -1008,6 +1032,7 @@ int rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
 	void *opaque);
 
 /**
+ * @deprecated
  * Add physical memory for objects in the pool at init
  *
  * Add a virtually contiguous memory chunk in the pool where objects can
@@ -1033,6 +1058,7 @@ int rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
  *   On error, the chunks are not added in the memory list of the
  *   mempool and a negative errno is returned.
  */
+__rte_deprecated
 int rte_mempool_populate_iova_tab(struct rte_mempool *mp, char *vaddr,
 	const rte_iova_t iova[], uint32_t pg_num, uint32_t pg_shift,
 	rte_mempool_memchunk_free_cb_t *free_cb, void *opaque);
@@ -1652,6 +1678,7 @@ uint32_t rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
 	struct rte_mempool_objsz *sz);
 
 /**
+ * @deprecated
  * Get the size of memory required to store mempool elements.
  *
  * Calculate the maximum amount of memory required to store given number
@@ -1674,10 +1701,12 @@ uint32_t rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
  * @return
  *   Required memory size aligned at page boundary.
  */
+__rte_deprecated
 size_t rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz,
 	uint32_t pg_shift, unsigned int flags);
 
 /**
+ * @deprecated
  * Get the size of memory required to store mempool elements.
  *
  * Calculate how much memory would be actually required with the given
@@ -1705,6 +1734,7 @@ size_t rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz,
  *   buffer is too small, return a negative value whose absolute value
  *   is the actual number of elements that can be stored in that buffer.
  */
+__rte_deprecated
 ssize_t rte_mempool_xmem_usage(void *vaddr, uint32_t elt_num,
 	size_t total_elt_sz, const rte_iova_t iova[], uint32_t pg_num,
 	uint32_t pg_shift, unsigned int flags);
diff --git a/lib/librte_mempool/rte_mempool_ops_default.c b/lib/librte_mempool/rte_mempool_ops_default.c
index 3defc15..fd63ca1 100644
--- a/lib/librte_mempool/rte_mempool_ops_default.c
+++ b/lib/librte_mempool/rte_mempool_ops_default.c
@@ -16,8 +16,8 @@ rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
 
 	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
 
-	mem_size = rte_mempool_xmem_size(obj_num, total_elt_sz, pg_shift,
-					 mp->flags);
+	mem_size = rte_mempool_calc_mem_size_helper(obj_num, total_elt_sz,
+						    pg_shift);
 
 	*min_chunk_size = RTE_MAX((size_t)1 << pg_shift, total_elt_sz);
 
diff --git a/test/test/test_mempool.c b/test/test/test_mempool.c
index 63f921e..8d29af2 100644
--- a/test/test/test_mempool.c
+++ b/test/test/test_mempool.c
@@ -444,34 +444,6 @@ test_mempool_same_name_twice_creation(void)
 	return 0;
 }
 
-/*
- * Basic test for mempool_xmem functions.
- */
-static int
-test_mempool_xmem_misc(void)
-{
-	uint32_t elt_num, total_size;
-	size_t sz;
-	ssize_t usz;
-
-	elt_num = MAX_KEEP;
-	total_size = rte_mempool_calc_obj_size(MEMPOOL_ELT_SIZE, 0, NULL);
-	sz = rte_mempool_xmem_size(elt_num, total_size, MEMPOOL_PG_SHIFT_MAX,
-					0);
-
-	usz = rte_mempool_xmem_usage(NULL, elt_num, total_size, 0, 1,
-		MEMPOOL_PG_SHIFT_MAX, 0);
-
-	if (sz != (size_t)usz)  {
-		printf("failure @ %s: rte_mempool_xmem_usage(%u, %u) "
-			"returns: %#zx, while expected: %#zx;\n",
-			__func__, elt_num, total_size, sz, (size_t)usz);
-		return -1;
-	}
-
-	return 0;
-}
-
 static void
 walk_cb(struct rte_mempool *mp, void *userdata __rte_unused)
 {
@@ -596,9 +568,6 @@ test_mempool(void)
 	if (test_mempool_same_name_twice_creation() < 0)
 		goto err;
 
-	if (test_mempool_xmem_misc() < 0)
-		goto err;
-
 	/* test the stack handler */
 	if (test_mempool_basic(mp_stack, 1) < 0)
 		goto err;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [PATCH v2 08/11] mempool/octeontx: prepare to remove register memory area op
  2018-03-25 16:20   ` [PATCH v2 00/11] " Andrew Rybchenko
                       ` (6 preceding siblings ...)
  2018-03-25 16:20     ` [PATCH v2 07/11] mempool: deprecate xmem functions Andrew Rybchenko
@ 2018-03-25 16:20     ` Andrew Rybchenko
  2018-03-25 16:20     ` [PATCH v2 09/11] mempool/dpaa: " Andrew Rybchenko
                       ` (2 subsequent siblings)
  10 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-25 16:20 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ, Santosh Shukla, Jerin Jacob

The callback to populate pool objects has all the required information
and is executed a bit later than the register memory area callback.
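
The resulting pattern (sketched below with a hypothetical driver and a
made-up example_hw_set_buffer_range() helper) is to do the former
register_memory_area work at the start of populate and then hand the
object layout to the default helper:

static int
example_populate(struct rte_mempool *mp, unsigned int max_objs,
		 void *vaddr, rte_iova_t iova, size_t len,
		 rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
{
	int ret;

	/* Do what register_memory_area used to do... */
	ret = example_hw_set_buffer_range(mp, vaddr, iova, len);
	if (ret < 0)
		return ret;

	/* ...then lay the objects out with the default helper. */
	return rte_mempool_op_populate_default(mp, max_objs, vaddr, iova, len,
					       obj_cb, obj_cb_arg);
}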

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
v1 -> v2
 - none

 drivers/mempool/octeontx/rte_mempool_octeontx.c | 25 ++++++++++---------------
 1 file changed, 10 insertions(+), 15 deletions(-)

diff --git a/drivers/mempool/octeontx/rte_mempool_octeontx.c b/drivers/mempool/octeontx/rte_mempool_octeontx.c
index 64ed528..ab94dfe 100644
--- a/drivers/mempool/octeontx/rte_mempool_octeontx.c
+++ b/drivers/mempool/octeontx/rte_mempool_octeontx.c
@@ -152,26 +152,15 @@ octeontx_fpavf_calc_mem_size(const struct rte_mempool *mp,
 }
 
 static int
-octeontx_fpavf_register_memory_area(const struct rte_mempool *mp,
-				    char *vaddr, rte_iova_t paddr, size_t len)
-{
-	RTE_SET_USED(paddr);
-	uint8_t gpool;
-	uintptr_t pool_bar;
-
-	gpool = octeontx_fpa_bufpool_gpool(mp->pool_id);
-	pool_bar = mp->pool_id & ~(uint64_t)FPA_GPOOL_MASK;
-
-	return octeontx_fpavf_pool_set_range(pool_bar, len, vaddr, gpool);
-}
-
-static int
 octeontx_fpavf_populate(struct rte_mempool *mp, unsigned int max_objs,
 			void *vaddr, rte_iova_t iova, size_t len,
 			rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
 {
 	size_t total_elt_sz;
 	size_t off;
+	uint8_t gpool;
+	uintptr_t pool_bar;
+	int ret;
 
 	if (iova == RTE_BAD_IOVA)
 		return -EINVAL;
@@ -188,6 +177,13 @@ octeontx_fpavf_populate(struct rte_mempool *mp, unsigned int max_objs,
 	iova += off;
 	len -= off;
 
+	gpool = octeontx_fpa_bufpool_gpool(mp->pool_id);
+	pool_bar = mp->pool_id & ~(uint64_t)FPA_GPOOL_MASK;
+
+	ret = octeontx_fpavf_pool_set_range(pool_bar, len, vaddr, gpool);
+	if (ret < 0)
+		return ret;
+
 	return rte_mempool_op_populate_default(mp, max_objs, vaddr, iova, len,
 					       obj_cb, obj_cb_arg);
 }
@@ -199,7 +195,6 @@ static struct rte_mempool_ops octeontx_fpavf_ops = {
 	.enqueue = octeontx_fpavf_enqueue,
 	.dequeue = octeontx_fpavf_dequeue,
 	.get_count = octeontx_fpavf_get_count,
-	.register_memory_area = octeontx_fpavf_register_memory_area,
 	.calc_mem_size = octeontx_fpavf_calc_mem_size,
 	.populate = octeontx_fpavf_populate,
 };
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [PATCH v2 09/11] mempool/dpaa: prepare to remove register memory area op
  2018-03-25 16:20   ` [PATCH v2 00/11] " Andrew Rybchenko
                       ` (7 preceding siblings ...)
  2018-03-25 16:20     ` [PATCH v2 08/11] mempool/octeontx: prepare to remove register memory area op Andrew Rybchenko
@ 2018-03-25 16:20     ` Andrew Rybchenko
  2018-03-26  7:13       ` Andrew Rybchenko
  2018-03-25 16:20     ` [PATCH v2 10/11] mempool: remove callback to register memory area Andrew Rybchenko
  2018-03-25 16:20     ` [PATCH v2 11/11] mempool: support flushing the default cache of the mempool Andrew Rybchenko
  10 siblings, 1 reply; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-25 16:20 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ, Hemant Agrawal, Shreyansh Jain

The populate mempool driver callback is executed a bit later than
register memory area, provides the same information and will
substitute the latter since it gives more flexibility: in addition
to notifying about the memory area, it allows customizing how mempool
objects are stored in memory.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
v1 -> v2:
 - fix build error because of prototype mismatch

 drivers/mempool/dpaa/dpaa_mempool.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/drivers/mempool/dpaa/dpaa_mempool.c b/drivers/mempool/dpaa/dpaa_mempool.c
index 7b82f4b..0dcb488 100644
--- a/drivers/mempool/dpaa/dpaa_mempool.c
+++ b/drivers/mempool/dpaa/dpaa_mempool.c
@@ -264,10 +264,9 @@ dpaa_mbuf_get_count(const struct rte_mempool *mp)
 }
 
 static int
-dpaa_register_memory_area(const struct rte_mempool *mp,
-			  char *vaddr __rte_unused,
-			  rte_iova_t paddr __rte_unused,
-			  size_t len)
+dpaa_populate(struct rte_mempool *mp, unsigned int max_objs,
+	      char *vaddr, rte_iova_t paddr, size_t len,
+	      rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
 {
 	struct dpaa_bp_info *bp_info;
 	unsigned int total_elt_sz;
@@ -289,7 +288,9 @@ dpaa_register_memory_area(const struct rte_mempool *mp,
 	if (len >= total_elt_sz * mp->size)
 		bp_info->flags |= DPAA_MPOOL_SINGLE_SEGMENT;
 
-	return 0;
+	return rte_mempool_op_populate_default(mp, max_objs, vaddr, paddr, len,
+					       obj_cb, obj_cb_arg);
+
 }
 
 struct rte_mempool_ops dpaa_mpool_ops = {
@@ -299,7 +300,7 @@ struct rte_mempool_ops dpaa_mpool_ops = {
 	.enqueue = dpaa_mbuf_free_bulk,
 	.dequeue = dpaa_mbuf_alloc_bulk,
 	.get_count = dpaa_mbuf_get_count,
-	.register_memory_area = dpaa_register_memory_area,
+	.populate = dpaa_populate,
 };
 
 MEMPOOL_REGISTER_OPS(dpaa_mpool_ops);
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [PATCH v2 10/11] mempool: remove callback to register memory area
  2018-03-25 16:20   ` [PATCH v2 00/11] " Andrew Rybchenko
                       ` (8 preceding siblings ...)
  2018-03-25 16:20     ` [PATCH v2 09/11] mempool/dpaa: " Andrew Rybchenko
@ 2018-03-25 16:20     ` Andrew Rybchenko
  2018-03-25 16:20     ` [PATCH v2 11/11] mempool: support flushing the default cache of the mempool Andrew Rybchenko
  10 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-25 16:20 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ

The callback is not required any more since there is a new callback
to populate objects using a provided memory area, which supplies
the same information.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
v1 -> v2:
 - none

RFCv2 -> v1:
 - advertise ABI changes in release notes

 doc/guides/rel_notes/deprecation.rst       |  1 -
 doc/guides/rel_notes/release_18_05.rst     |  2 ++
 lib/librte_mempool/rte_mempool.c           |  5 -----
 lib/librte_mempool/rte_mempool.h           | 31 ------------------------------
 lib/librte_mempool/rte_mempool_ops.c       | 14 --------------
 lib/librte_mempool/rte_mempool_version.map |  1 -
 6 files changed, 2 insertions(+), 52 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 473330d..5301259 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -63,7 +63,6 @@ Deprecation Notices
 
   The following changes are planned:
 
-  - substitute ``register_memory_area`` with ``populate`` ops.
   - addition of new op to allocate contiguous
     block of objects if underlying driver supports it.
 
diff --git a/doc/guides/rel_notes/release_18_05.rst b/doc/guides/rel_notes/release_18_05.rst
index 6a8db54..016c4ed 100644
--- a/doc/guides/rel_notes/release_18_05.rst
+++ b/doc/guides/rel_notes/release_18_05.rst
@@ -108,6 +108,8 @@ ABI Changes
   Callback ``get_capabilities`` has been removed from ``rte_mempool_ops``
   since its features are covered by ``calc_mem_size`` and ``populate``
   callbacks.
+  Callback ``register_memory_area`` has been removed from ``rte_mempool_ops``
+  since the new callback ``populate`` may be used instead of it.
 
 
 Removed Items
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 8c3b0b1..c58bcc6 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -355,11 +355,6 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	if (ret != 0)
 		return ret;
 
-	/* Notify memory area to mempool */
-	ret = rte_mempool_ops_register_memory_area(mp, vaddr, iova, len);
-	if (ret != -ENOTSUP && ret < 0)
-		return ret;
-
 	/* mempool is already populated */
 	if (mp->populated_size >= mp->size)
 		return -ENOSPC;
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 9107f5a..314f909 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -371,12 +371,6 @@ typedef int (*rte_mempool_dequeue_t)(struct rte_mempool *mp,
 typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
 
 /**
- * Notify new memory area to mempool.
- */
-typedef int (*rte_mempool_ops_register_memory_area_t)
-(const struct rte_mempool *mp, char *vaddr, rte_iova_t iova, size_t len);
-
-/**
  * Calculate memory size required to store given number of objects.
  *
  * If mempool objects are not required to be IOVA-contiguous
@@ -514,10 +508,6 @@ struct rte_mempool_ops {
 	rte_mempool_dequeue_t dequeue;   /**< Dequeue an object. */
 	rte_mempool_get_count get_count; /**< Get qty of available objs. */
 	/**
-	 * Notify new memory area to mempool
-	 */
-	rte_mempool_ops_register_memory_area_t register_memory_area;
-	/**
 	 * Optional callback to calculate memory size required to
 	 * store specified number of objects.
 	 */
@@ -639,27 +629,6 @@ unsigned
 rte_mempool_ops_get_count(const struct rte_mempool *mp);
 
 /**
- * @internal wrapper for mempool_ops register_memory_area callback.
- * API to notify the mempool handler when a new memory area is added to pool.
- *
- * @param mp
- *   Pointer to the memory pool.
- * @param vaddr
- *   Pointer to the buffer virtual address.
- * @param iova
- *   Pointer to the buffer IO address.
- * @param len
- *   Pool size.
- * @return
- *   - 0: Success;
- *   - -ENOTSUP - doesn't support register_memory_area ops (valid error case).
- *   - Otherwise, rte_mempool_populate_phys fails thus pool create fails.
- */
-int
-rte_mempool_ops_register_memory_area(const struct rte_mempool *mp,
-				char *vaddr, rte_iova_t iova, size_t len);
-
-/**
  * @internal wrapper for mempool_ops calc_mem_size callback.
  * API to calculate size of memory required to store specified number of
  * object.
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
index 6ac669a..ea9be1e 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -57,7 +57,6 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h)
 	ops->enqueue = h->enqueue;
 	ops->dequeue = h->dequeue;
 	ops->get_count = h->get_count;
-	ops->register_memory_area = h->register_memory_area;
 	ops->calc_mem_size = h->calc_mem_size;
 	ops->populate = h->populate;
 
@@ -99,19 +98,6 @@ rte_mempool_ops_get_count(const struct rte_mempool *mp)
 }
 
 /* wrapper to notify new memory area to external mempool */
-int
-rte_mempool_ops_register_memory_area(const struct rte_mempool *mp, char *vaddr,
-					rte_iova_t iova, size_t len)
-{
-	struct rte_mempool_ops *ops;
-
-	ops = rte_mempool_get_ops(mp->ops_index);
-
-	RTE_FUNC_PTR_OR_ERR_RET(ops->register_memory_area, -ENOTSUP);
-	return ops->register_memory_area(mp, vaddr, iova, len);
-}
-
-/* wrapper to notify new memory area to external mempool */
 ssize_t
 rte_mempool_ops_calc_mem_size(const struct rte_mempool *mp,
 				uint32_t obj_num, uint32_t pg_shift,
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index 637f73f..cf375db 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -45,7 +45,6 @@ DPDK_16.07 {
 DPDK_17.11 {
 	global:
 
-	rte_mempool_ops_register_memory_area;
 	rte_mempool_populate_iova;
 	rte_mempool_populate_iova_tab;
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [PATCH v2 11/11] mempool: support flushing the default cache of the mempool
  2018-03-25 16:20   ` [PATCH v2 00/11] " Andrew Rybchenko
                       ` (9 preceding siblings ...)
  2018-03-25 16:20     ` [PATCH v2 10/11] mempool: remove callback to register memory area Andrew Rybchenko
@ 2018-03-25 16:20     ` Andrew Rybchenko
  10 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-25 16:20 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ, Artem V. Andreev

From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>

The mempool get/put API manages the cache itself, but sometimes the
cache needs to be flushed explicitly.

The function is moved later in the file since it now requires
rte_mempool_default_cache().
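
To illustrate the intended usage, here is a minimal sketch (the helper
name is hypothetical and not part of the patch; it assumes rte_mempool.h
is included):

/* Sketch only: 'mp' is a pool created elsewhere, 'cache' a user-owned
 * cache obtained from rte_mempool_cache_create().
 */
static void
example_flush_caches(struct rte_mempool *mp, struct rte_mempool_cache *cache)
{
	/* Return objects held in the user-owned cache to the pool. */
	rte_mempool_cache_flush(cache, mp);

	/* Passing NULL flushes the default per-lcore cache of the calling
	 * lcore; it is a no-op if there is no cache or it is empty.
	 */
	rte_mempool_cache_flush(NULL, mp);
}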

Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
v1 -> v2:
 - none

 lib/librte_mempool/rte_mempool.h | 36 ++++++++++++++++++++----------------
 1 file changed, 20 insertions(+), 16 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 314f909..3e06ae0 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -1169,22 +1169,6 @@ void
 rte_mempool_cache_free(struct rte_mempool_cache *cache);
 
 /**
- * Flush a user-owned mempool cache to the specified mempool.
- *
- * @param cache
- *   A pointer to the mempool cache.
- * @param mp
- *   A pointer to the mempool.
- */
-static __rte_always_inline void
-rte_mempool_cache_flush(struct rte_mempool_cache *cache,
-			struct rte_mempool *mp)
-{
-	rte_mempool_ops_enqueue_bulk(mp, cache->objs, cache->len);
-	cache->len = 0;
-}
-
-/**
  * Get a pointer to the per-lcore default mempool cache.
  *
  * @param mp
@@ -1207,6 +1191,26 @@ rte_mempool_default_cache(struct rte_mempool *mp, unsigned lcore_id)
 }
 
 /**
+ * Flush a user-owned mempool cache to the specified mempool.
+ *
+ * @param cache
+ *   A pointer to the mempool cache.
+ * @param mp
+ *   A pointer to the mempool.
+ */
+static __rte_always_inline void
+rte_mempool_cache_flush(struct rte_mempool_cache *cache,
+			struct rte_mempool *mp)
+{
+	if (cache == NULL)
+		cache = rte_mempool_default_cache(mp, rte_lcore_id());
+	if (cache == NULL || cache->len == 0)
+		return;
+	rte_mempool_ops_enqueue_bulk(mp, cache->objs, cache->len);
+	cache->len = 0;
+}
+
+/**
  * @internal Put several objects back in the mempool; used internally.
  * @param mp
  *   A pointer to the mempool structure.
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* Re: [PATCH v2 09/11] mempool/dpaa: prepare to remove register memory area op
  2018-03-25 16:20     ` [PATCH v2 09/11] mempool/dpaa: " Andrew Rybchenko
@ 2018-03-26  7:13       ` Andrew Rybchenko
  0 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-26  7:13 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ, Hemant Agrawal, Shreyansh Jain

On 03/25/2018 07:20 PM, Andrew Rybchenko wrote:
> The populate mempool driver callback is executed a bit later than
> register memory area, provides the same information and will
> substitute the latter since it gives more flexibility: in addition
> to notifying about the memory area, it allows customizing how mempool
> objects are stored in memory.
>
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
> ---
> v1 -> v2:
>   - fix build error because of prototype mismatch
>
>   drivers/mempool/dpaa/dpaa_mempool.c | 13 +++++++------
>   1 file changed, 7 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/mempool/dpaa/dpaa_mempool.c b/drivers/mempool/dpaa/dpaa_mempool.c
> index 7b82f4b..0dcb488 100644
> --- a/drivers/mempool/dpaa/dpaa_mempool.c
> +++ b/drivers/mempool/dpaa/dpaa_mempool.c
> @@ -264,10 +264,9 @@ dpaa_mbuf_get_count(const struct rte_mempool *mp)
>   }
>   
>   static int
> -dpaa_register_memory_area(const struct rte_mempool *mp,
> -			  char *vaddr __rte_unused,
> -			  rte_iova_t paddr __rte_unused,
> -			  size_t len)
> +dpaa_populate(struct rte_mempool *mp, unsigned int max_objs,
> +	      char *vaddr, rte_iova_t paddr, size_t len,

Self NACK: the parameter above must be 'void *vaddr'

> +	      rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
>   {
>   	struct dpaa_bp_info *bp_info;
>   	unsigned int total_elt_sz;
> @@ -289,7 +288,9 @@ dpaa_register_memory_area(const struct rte_mempool *mp,
>   	if (len >= total_elt_sz * mp->size)
>   		bp_info->flags |= DPAA_MPOOL_SINGLE_SEGMENT;
>   
> -	return 0;
> +	return rte_mempool_op_populate_default(mp, max_objs, vaddr, paddr, len,
> +					       obj_cb, obj_cb_arg);
> +
>   }
>   
>   struct rte_mempool_ops dpaa_mpool_ops = {
> @@ -299,7 +300,7 @@ struct rte_mempool_ops dpaa_mpool_ops = {
>   	.enqueue = dpaa_mbuf_free_bulk,
>   	.dequeue = dpaa_mbuf_alloc_bulk,
>   	.get_count = dpaa_mbuf_get_count,
> -	.register_memory_area = dpaa_register_memory_area,
> +	.populate = dpaa_populate,
>   };
>   
>   MEMPOOL_REGISTER_OPS(dpaa_mpool_ops);

^ permalink raw reply	[flat|nested] 197+ messages in thread

* [PATCH v3 00/11] mempool: prepare to add bucket driver
  2018-01-23 13:15 ` [RFC v2 00/17] " Andrew Rybchenko
                     ` (19 preceding siblings ...)
  2018-03-25 16:20   ` [PATCH v2 00/11] " Andrew Rybchenko
@ 2018-03-26 16:09   ` Andrew Rybchenko
  2018-03-26 16:09     ` [PATCH v3 01/11] mempool: fix memhdr leak when no objects are populated Andrew Rybchenko
                       ` (10 more replies)
  2018-03-26 16:12   ` [PATCH v1 0/6] mempool: add bucket driver Andrew Rybchenko
  21 siblings, 11 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-26 16:09 UTC (permalink / raw)
  To: dev
  Cc: Olivier MATZ, Thomas Monjalon, Anatoly Burakov, Santosh Shukla,
	Jerin Jacob, Hemant Agrawal, Shreyansh Jain

The patch series should be applied on top of [7].

The initial patch series [1] is split into two to simplify processing.
The second series relies on this one and will add bucket mempool driver
and related ops.

The patch series has generic enhancements suggested by Olivier.
Basically it adds driver callbacks to calculate the required memory size
and to populate objects using a provided memory area. This allows the
so-called capability flags, previously used to tell the generic code how
to allocate and slice memory into mempool objects, to be removed.
The clean-up which removes get_capabilities and register_memory_area is
not strictly required, but I think it is the right thing to do.
Existing mempool drivers are updated (see the ops sketch below).
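
For orientation, the ops structure of a hypothetical driver looks roughly
as below after the series (the example_* names are made up; calc_mem_size
and populate are the new optional callbacks added by the patches which
follow):

static struct rte_mempool_ops example_ops = {
	.name = "example",
	.alloc = example_alloc,
	.free = example_free,
	.enqueue = example_enqueue,
	.dequeue = example_dequeue,
	.get_count = example_get_count,
	/* New, optional: report memory size and alignment requirements. */
	.calc_mem_size = example_calc_mem_size,
	/* New, optional: place objects in the provided memory chunk. */
	.populate = example_populate,
};

MEMPOOL_REGISTER_OPS(example_ops);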

rte_mempool_populate_iova_tab() is also deprecated in v2 as agreed in [2].
Unfortunately it requires adding the -Wno-deprecated-declarations flag
to librte_mempool since the function is used by the earlier deprecated
rte_mempool_populate_phys_tab(). If the latter can be removed in this
release, we can avoid adding the flag that allows usage of deprecated
functions.

One open question remains from previous review [3].

The patch series interferes with memory hotplug for DPDK [4] ([5] to be
precise), so a rebase may be required.

A new patch is added to the series to rename MEMPOOL_F_NO_PHYS_CONTIG
as MEMPOOL_F_NO_IOVA_CONTIG as agreed in [6].
MEMPOOL_F_CAPA_PHYS_CONTIG is not renamed since it removed in this
patchset.

It breaks ABI since it changes rte_mempool_ops. Also it removes
rte_mempool_ops_register_memory_area() and
rte_mempool_ops_get_capabilities() since corresponding callbacks are
removed.

Internal global functions are not listed in the map file since they are
not a part of the external API.

[1] https://dpdk.org/ml/archives/dev/2018-January/088698.html
[2] https://dpdk.org/ml/archives/dev/2018-March/093186.html
[3] https://dpdk.org/ml/archives/dev/2018-March/093329.html
[4] https://dpdk.org/ml/archives/dev/2018-March/092070.html
[5] https://dpdk.org/ml/archives/dev/2018-March/092088.html
[6] https://dpdk.org/ml/archives/dev/2018-March/093345.html
[7] https://dpdk.org/ml/archives/dev/2018-March/093196.html

v2 -> v3:
  - fix build error in mempool/dpaa: prepare to remove register memory area op

v1 -> v2:
  - deprecate rte_mempool_populate_iova_tab()
  - add patch to fix memory leak if no objects are populated
  - add patch to rename MEMPOOL_F_NO_PHYS_CONTIG
  - minor fixes (typos, blank line at the end of file)
  - highlight meaning of min_chunk_size (when it is virtual or
    physical contiguous)
  - make sure that mempool is initialized in rte_mempool_populate_anon()
  - move patch to ensure that mempool is initialized earlier in the series

RFCv2 -> v1:
  - split the series in two
  - squash octeontx patches which implement calc_mem_size and populate
    callbacks into the patch which removes get_capabilities since it is
    the easiest way to untangle the tangle of tightly related library
    functions and flags advertised by the driver
  - consistently name default callbacks
  - move default callbacks to dedicated file
  - see detailed description in patches

RFCv1 -> RFCv2:
  - add driver ops to calculate required memory size and populate
    mempool objects, remove extra flags which were required before
    to control it
  - transition of octeontx and dpaa drivers to the new callbacks
  - change info API to get information from the driver that the API user
    needs in order to know the contiguous block size
  - remove get_capabilities (not required any more and may be
    substituted with more in info get API)
  - remove register_memory_area since it is substituted with
    populate callback which can do more
  - use SPDX tags
  - avoid all objects affinity to single lcore
  - fix bucket get_count
  - deprecate XMEM API
  - avoid introduction of a new function to flush cache
  - fix NO_CACHE_ALIGN case in bucket mempool

Andrew Rybchenko (9):
  mempool: fix memhdr leak when no objects are populated
  mempool: rename flag to control IOVA-contiguous objects
  mempool: add op to calculate memory size to be allocated
  mempool: add op to populate objects using provided memory
  mempool: remove callback to get capabilities
  mempool: deprecate xmem functions
  mempool/octeontx: prepare to remove register memory area op
  mempool/dpaa: prepare to remove register memory area op
  mempool: remove callback to register memory area

Artem V. Andreev (2):
  mempool: ensure the mempool is initialized before populating
  mempool: support flushing the default cache of the mempool

 doc/guides/rel_notes/deprecation.rst            |  12 +-
 doc/guides/rel_notes/release_18_05.rst          |  33 ++-
 drivers/mempool/dpaa/dpaa_mempool.c             |  13 +-
 drivers/mempool/octeontx/rte_mempool_octeontx.c |  64 ++++--
 drivers/net/thunderx/nicvf_ethdev.c             |   2 +-
 lib/librte_mempool/Makefile                     |   6 +-
 lib/librte_mempool/meson.build                  |  17 +-
 lib/librte_mempool/rte_mempool.c                | 179 ++++++++-------
 lib/librte_mempool/rte_mempool.h                | 280 +++++++++++++++++-------
 lib/librte_mempool/rte_mempool_ops.c            |  37 ++--
 lib/librte_mempool/rte_mempool_ops_default.c    |  51 +++++
 lib/librte_mempool/rte_mempool_version.map      |  10 +-
 test/test/test_mempool.c                        |  31 ---
 13 files changed, 485 insertions(+), 250 deletions(-)
 create mode 100644 lib/librte_mempool/rte_mempool_ops_default.c

-- 
2.7.4

^ permalink raw reply	[flat|nested] 197+ messages in thread

* [PATCH v3 01/11] mempool: fix memhdr leak when no objects are populated
  2018-03-26 16:09   ` [PATCH v3 00/11] mempool: prepare to add bucket driver Andrew Rybchenko
@ 2018-03-26 16:09     ` Andrew Rybchenko
  2018-04-06 15:50       ` Olivier Matz
  2018-03-26 16:09     ` [PATCH v3 02/11] mempool: rename flag to control IOVA-contiguous objects Andrew Rybchenko
                       ` (9 subsequent siblings)
  10 siblings, 1 reply; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-26 16:09 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ, stable

Fixes: 84121f197187 ("mempool: store memory chunks in a list")
Cc: stable@dpdk.org

Suggested-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
v2 -> v3:
 - none

v1 -> v2:
 - added in v2 as discussed in [1]

[1] https://dpdk.org/ml/archives/dev/2018-March/093329.html

 lib/librte_mempool/rte_mempool.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 54f7f4b..80bf941 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -408,12 +408,18 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	}
 
 	/* not enough room to store one object */
-	if (i == 0)
-		return -EINVAL;
+	if (i == 0) {
+		ret = -EINVAL;
+		goto fail;
+	}
 
 	STAILQ_INSERT_TAIL(&mp->mem_list, memhdr, next);
 	mp->nb_mem_chunks++;
 	return i;
+
+fail:
+	rte_free(memhdr);
+	return ret;
 }
 
 int
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [PATCH v3 02/11] mempool: rename flag to control IOVA-contiguous objects
  2018-03-26 16:09   ` [PATCH v3 00/11] mempool: prepare to add bucket driver Andrew Rybchenko
  2018-03-26 16:09     ` [PATCH v3 01/11] mempool: fix memhdr leak when no objects are populated Andrew Rybchenko
@ 2018-03-26 16:09     ` Andrew Rybchenko
  2018-04-06 15:50       ` Olivier Matz
  2018-03-26 16:09     ` [PATCH v3 03/11] mempool: ensure the mempool is initialized before populating Andrew Rybchenko
                       ` (8 subsequent siblings)
  10 siblings, 1 reply; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-26 16:09 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ

Flag MEMPOOL_F_NO_PHYS_CONTIG is renamed to MEMPOOL_F_NO_IOVA_CONTIG
to follow IO memory contiguity terminology.
MEMPOOL_F_NO_PHYS_CONTIG is kept for backward compatibility and
deprecated.

Suggested-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
v2 -> v3:
 - none

v1 -> v2:
 - added in v2 as discussed in [1]

[1] https://dpdk.org/ml/archives/dev/2018-March/093345.html

 drivers/net/thunderx/nicvf_ethdev.c | 2 +-
 lib/librte_mempool/rte_mempool.c    | 6 +++---
 lib/librte_mempool/rte_mempool.h    | 9 +++++----
 3 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 067f224..f3be744 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -1308,7 +1308,7 @@ nicvf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 	}
 
 	/* Mempool memory must be physically contiguous */
-	if (mp->flags & MEMPOOL_F_NO_PHYS_CONTIG) {
+	if (mp->flags & MEMPOOL_F_NO_IOVA_CONTIG) {
 		PMD_INIT_LOG(ERR, "Mempool memory must be physically contiguous");
 		return -EINVAL;
 	}
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 80bf941..6ffa795 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -446,7 +446,7 @@ rte_mempool_populate_iova_tab(struct rte_mempool *mp, char *vaddr,
 	if (mp->nb_mem_chunks != 0)
 		return -EEXIST;
 
-	if (mp->flags & MEMPOOL_F_NO_PHYS_CONTIG)
+	if (mp->flags & MEMPOOL_F_NO_IOVA_CONTIG)
 		return rte_mempool_populate_iova(mp, vaddr, RTE_BAD_IOVA,
 			pg_num * pg_sz, free_cb, opaque);
 
@@ -500,7 +500,7 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
 	if (RTE_ALIGN_CEIL(len, pg_sz) != len)
 		return -EINVAL;
 
-	if (mp->flags & MEMPOOL_F_NO_PHYS_CONTIG)
+	if (mp->flags & MEMPOOL_F_NO_IOVA_CONTIG)
 		return rte_mempool_populate_iova(mp, addr, RTE_BAD_IOVA,
 			len, free_cb, opaque);
 
@@ -602,7 +602,7 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 			goto fail;
 		}
 
-		if (mp->flags & MEMPOOL_F_NO_PHYS_CONTIG)
+		if (mp->flags & MEMPOOL_F_NO_IOVA_CONTIG)
 			iova = RTE_BAD_IOVA;
 		else
 			iova = mz->iova;
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 8b1b7f7..e531a15 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -244,7 +244,8 @@ struct rte_mempool {
 #define MEMPOOL_F_SP_PUT         0x0004 /**< Default put is "single-producer".*/
 #define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
 #define MEMPOOL_F_POOL_CREATED   0x0010 /**< Internal: pool is created. */
-#define MEMPOOL_F_NO_PHYS_CONTIG 0x0020 /**< Don't need physically contiguous objs. */
+#define MEMPOOL_F_NO_IOVA_CONTIG 0x0020 /**< Don't need IOVA contiguous objs. */
+#define MEMPOOL_F_NO_PHYS_CONTIG MEMPOOL_F_NO_IOVA_CONTIG /* deprecated */
 /**
  * This capability flag is advertised by a mempool handler, if the whole
  * memory area containing the objects must be physically contiguous.
@@ -710,8 +711,8 @@ typedef void (rte_mempool_ctor_t)(struct rte_mempool *, void *);
  *   - MEMPOOL_F_SC_GET: If this flag is set, the default behavior
  *     when using rte_mempool_get() or rte_mempool_get_bulk() is
  *     "single-consumer". Otherwise, it is "multi-consumers".
- *   - MEMPOOL_F_NO_PHYS_CONTIG: If set, allocated objects won't
- *     necessarily be contiguous in physical memory.
+ *   - MEMPOOL_F_NO_IOVA_CONTIG: If set, allocated objects won't
+ *     necessarily be contiguous in IO memory.
  * @return
  *   The pointer to the new allocated mempool, on success. NULL on error
  *   with rte_errno set appropriately. Possible rte_errno values include:
@@ -1439,7 +1440,7 @@ rte_mempool_empty(const struct rte_mempool *mp)
  *   A pointer (virtual address) to the element of the pool.
  * @return
  *   The IO address of the elt element.
- *   If the mempool was created with MEMPOOL_F_NO_PHYS_CONTIG, the
+ *   If the mempool was created with MEMPOOL_F_NO_IOVA_CONTIG, the
  *   returned value is RTE_BAD_IOVA.
  */
 static inline rte_iova_t
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [PATCH v3 03/11] mempool: ensure the mempool is initialized before populating
  2018-03-26 16:09   ` [PATCH v3 00/11] mempool: prepare to add bucket driver Andrew Rybchenko
  2018-03-26 16:09     ` [PATCH v3 01/11] mempool: fix memhdr leak when no objects are populated Andrew Rybchenko
  2018-03-26 16:09     ` [PATCH v3 02/11] mempool: rename flag to control IOVA-contiguous objects Andrew Rybchenko
@ 2018-03-26 16:09     ` Andrew Rybchenko
  2018-04-04 15:06       ` santosh
  2018-04-06 15:50       ` Olivier Matz
  2018-03-26 16:09     ` [PATCH v3 04/11] mempool: add op to calculate memory size to be allocated Andrew Rybchenko
                       ` (7 subsequent siblings)
  10 siblings, 2 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-26 16:09 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ, Artem V. Andreev

From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>

The callback to calculate the required memory area size may require the
mempool driver data to be already allocated and initialized.

Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
v2 -> v3:
 - none

v1 -> v2:
 - add init check to mempool_ops_alloc_once()
 - move earlier in the patch series since it is required when driver
   ops are called and it is better to have it before new ops are added

RFCv2 -> v1:
 - rename helper function as mempool_ops_alloc_once()

 lib/librte_mempool/rte_mempool.c | 33 ++++++++++++++++++++++++++-------
 1 file changed, 26 insertions(+), 7 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 6ffa795..d8e3720 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -323,6 +323,21 @@ rte_mempool_free_memchunks(struct rte_mempool *mp)
 	}
 }
 
+static int
+mempool_ops_alloc_once(struct rte_mempool *mp)
+{
+	int ret;
+
+	/* create the internal ring if not already done */
+	if ((mp->flags & MEMPOOL_F_POOL_CREATED) == 0) {
+		ret = rte_mempool_ops_alloc(mp);
+		if (ret != 0)
+			return ret;
+		mp->flags |= MEMPOOL_F_POOL_CREATED;
+	}
+	return 0;
+}
+
 /* Add objects in the pool, using a physically contiguous memory
  * zone. Return the number of objects added, or a negative value
  * on error.
@@ -339,13 +354,9 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	struct rte_mempool_memhdr *memhdr;
 	int ret;
 
-	/* create the internal ring if not already done */
-	if ((mp->flags & MEMPOOL_F_POOL_CREATED) == 0) {
-		ret = rte_mempool_ops_alloc(mp);
-		if (ret != 0)
-			return ret;
-		mp->flags |= MEMPOOL_F_POOL_CREATED;
-	}
+	ret = mempool_ops_alloc_once(mp);
+	if (ret != 0)
+		return ret;
 
 	/* Notify memory area to mempool */
 	ret = rte_mempool_ops_register_memory_area(mp, vaddr, iova, len);
@@ -556,6 +567,10 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 	unsigned int mp_flags;
 	int ret;
 
+	ret = mempool_ops_alloc_once(mp);
+	if (ret != 0)
+		return ret;
+
 	/* mempool must not be populated */
 	if (mp->nb_mem_chunks != 0)
 		return -EEXIST;
@@ -667,6 +682,10 @@ rte_mempool_populate_anon(struct rte_mempool *mp)
 		return 0;
 	}
 
+	ret = mempool_ops_alloc_once(mp);
+	if (ret != 0)
+		return ret;
+
 	/* get chunk of virtually continuous memory */
 	size = get_anon_size(mp);
 	addr = mmap(NULL, size, PROT_READ | PROT_WRITE,
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [PATCH v3 04/11] mempool: add op to calculate memory size to be allocated
  2018-03-26 16:09   ` [PATCH v3 00/11] mempool: prepare to add bucket driver Andrew Rybchenko
                       ` (2 preceding siblings ...)
  2018-03-26 16:09     ` [PATCH v3 03/11] mempool: ensure the mempool is initialized before populating Andrew Rybchenko
@ 2018-03-26 16:09     ` Andrew Rybchenko
  2018-04-04 15:08       ` santosh
                         ` (2 more replies)
  2018-03-26 16:09     ` [PATCH v3 05/11] mempool: add op to populate objects using provided memory Andrew Rybchenko
                       ` (6 subsequent siblings)
  10 siblings, 3 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-26 16:09 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ

The size of the memory chunk required to populate mempool objects
depends on how the objects are stored in memory. Different mempool
drivers may have different requirements, and the new operation allows
the memory size to be calculated in accordance with driver requirements
and the requirements on minimum memory chunk size and alignment to be
advertised in a generic way.

Bump ABI version since the patch breaks it.
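
As a rough illustration (hypothetical driver code, not part of this
patch), a driver that needs all objects to reside in one contiguous
chunk could build its callback on top of the default helper:

static ssize_t
example_calc_mem_size(const struct rte_mempool *mp, uint32_t obj_num,
		      uint32_t pg_shift, size_t *min_chunk_size,
		      size_t *align)
{
	ssize_t mem_size;

	/* Let the default helper compute the total size and alignment. */
	mem_size = rte_mempool_op_calc_mem_size_default(mp, obj_num, pg_shift,
							min_chunk_size, align);
	if (mem_size < 0)
		return mem_size;

	/* Advertise that the whole area must be one contiguous chunk. */
	*min_chunk_size = mem_size;
	return mem_size;
}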

Suggested-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
v2 -> v3:
 - none

v1 -> v2:
 - clarify min_chunk_size meaning
 - rebase on top of patch series which fixes library version in meson
   build

RFCv2 -> v1:
 - move default calc_mem_size callback to rte_mempool_ops_default.c
 - add ABI changes to release notes
 - name default callback consistently: rte_mempool_op_<callback>_default()
 - bump ABI version since it is the first patch which breaks ABI
 - describe default callback behaviour in details
 - avoid introduction of internal function to cope with deprecation
   (keep it to deprecation patch)
 - move cache-line or page boundary chunk alignment to default callback
 - highlight that min_chunk_size and align parameters are output only

 doc/guides/rel_notes/deprecation.rst         |  3 +-
 doc/guides/rel_notes/release_18_05.rst       |  7 ++-
 lib/librte_mempool/Makefile                  |  3 +-
 lib/librte_mempool/meson.build               |  5 +-
 lib/librte_mempool/rte_mempool.c             | 43 +++++++-------
 lib/librte_mempool/rte_mempool.h             | 86 +++++++++++++++++++++++++++-
 lib/librte_mempool/rte_mempool_ops.c         | 18 ++++++
 lib/librte_mempool/rte_mempool_ops_default.c | 38 ++++++++++++
 lib/librte_mempool/rte_mempool_version.map   |  7 +++
 9 files changed, 182 insertions(+), 28 deletions(-)
 create mode 100644 lib/librte_mempool/rte_mempool_ops_default.c

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 6594585..e02d4ca 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -72,8 +72,7 @@ Deprecation Notices
 
   - removal of ``get_capabilities`` mempool ops and related flags.
   - substitute ``register_memory_area`` with ``populate`` ops.
-  - addition of new ops to customize required memory chunk calculation,
-    customize objects population and allocate contiguous
+  - addition of new ops to customize objects population and allocate contiguous
     block of objects if underlying driver supports it.
 
 * mbuf: The control mbuf API will be removed in v18.05. The impacted
diff --git a/doc/guides/rel_notes/release_18_05.rst b/doc/guides/rel_notes/release_18_05.rst
index f2525bb..59583ea 100644
--- a/doc/guides/rel_notes/release_18_05.rst
+++ b/doc/guides/rel_notes/release_18_05.rst
@@ -80,6 +80,11 @@ ABI Changes
    Also, make sure to start the actual text at the margin.
    =========================================================
 
+* **Changed rte_mempool_ops structure.**
+
+  A new callback ``calc_mem_size`` has been added to ``rte_mempool_ops``
+  to allow to customize required memory size calculation.
+
 
 Removed Items
 -------------
@@ -152,7 +157,7 @@ The libraries prepended with a plus sign were incremented in this version.
      librte_latencystats.so.1
      librte_lpm.so.2
      librte_mbuf.so.3
-     librte_mempool.so.3
+   + librte_mempool.so.4
    + librte_meter.so.2
      librte_metrics.so.1
      librte_net.so.1
diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
index 24e735a..072740f 100644
--- a/lib/librte_mempool/Makefile
+++ b/lib/librte_mempool/Makefile
@@ -11,11 +11,12 @@ LDLIBS += -lrte_eal -lrte_ring
 
 EXPORT_MAP := rte_mempool_version.map
 
-LIBABIVER := 3
+LIBABIVER := 4
 
 # all source are stored in SRCS-y
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool.c
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_ops.c
+SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_ops_default.c
 # install includes
 SYMLINK-$(CONFIG_RTE_LIBRTE_MEMPOOL)-include := rte_mempool.h
 
diff --git a/lib/librte_mempool/meson.build b/lib/librte_mempool/meson.build
index 712720f..9e3b527 100644
--- a/lib/librte_mempool/meson.build
+++ b/lib/librte_mempool/meson.build
@@ -1,7 +1,8 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2017 Intel Corporation
 
-version = 3
-sources = files('rte_mempool.c', 'rte_mempool_ops.c')
+version = 4
+sources = files('rte_mempool.c', 'rte_mempool_ops.c',
+		'rte_mempool_ops_default.c')
 headers = files('rte_mempool.h')
 deps += ['ring']
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index d8e3720..dd2d0fe 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -561,10 +561,10 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 	unsigned int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
 	char mz_name[RTE_MEMZONE_NAMESIZE];
 	const struct rte_memzone *mz;
-	size_t size, total_elt_sz, align, pg_sz, pg_shift;
+	ssize_t mem_size;
+	size_t align, pg_sz, pg_shift;
 	rte_iova_t iova;
 	unsigned mz_id, n;
-	unsigned int mp_flags;
 	int ret;
 
 	ret = mempool_ops_alloc_once(mp);
@@ -575,29 +575,23 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 	if (mp->nb_mem_chunks != 0)
 		return -EEXIST;
 
-	/* Get mempool capabilities */
-	mp_flags = 0;
-	ret = rte_mempool_ops_get_capabilities(mp, &mp_flags);
-	if ((ret < 0) && (ret != -ENOTSUP))
-		return ret;
-
-	/* update mempool capabilities */
-	mp->flags |= mp_flags;
-
 	if (rte_eal_has_hugepages()) {
 		pg_shift = 0; /* not needed, zone is physically contiguous */
 		pg_sz = 0;
-		align = RTE_CACHE_LINE_SIZE;
 	} else {
 		pg_sz = getpagesize();
 		pg_shift = rte_bsf32(pg_sz);
-		align = pg_sz;
 	}
 
-	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
 	for (mz_id = 0, n = mp->size; n > 0; mz_id++, n -= ret) {
-		size = rte_mempool_xmem_size(n, total_elt_sz, pg_shift,
-						mp->flags);
+		size_t min_chunk_size;
+
+		mem_size = rte_mempool_ops_calc_mem_size(mp, n, pg_shift,
+				&min_chunk_size, &align);
+		if (mem_size < 0) {
+			ret = mem_size;
+			goto fail;
+		}
 
 		ret = snprintf(mz_name, sizeof(mz_name),
 			RTE_MEMPOOL_MZ_FORMAT "_%d", mp->name, mz_id);
@@ -606,7 +600,7 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 			goto fail;
 		}
 
-		mz = rte_memzone_reserve_aligned(mz_name, size,
+		mz = rte_memzone_reserve_aligned(mz_name, mem_size,
 			mp->socket_id, mz_flags, align);
 		/* not enough memory, retry with the biggest zone we have */
 		if (mz == NULL)
@@ -617,6 +611,12 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 			goto fail;
 		}
 
+		if (mz->len < min_chunk_size) {
+			rte_memzone_free(mz);
+			ret = -ENOMEM;
+			goto fail;
+		}
+
 		if (mp->flags & MEMPOOL_F_NO_IOVA_CONTIG)
 			iova = RTE_BAD_IOVA;
 		else
@@ -649,13 +649,14 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 static size_t
 get_anon_size(const struct rte_mempool *mp)
 {
-	size_t size, total_elt_sz, pg_sz, pg_shift;
+	size_t size, pg_sz, pg_shift;
+	size_t min_chunk_size;
+	size_t align;
 
 	pg_sz = getpagesize();
 	pg_shift = rte_bsf32(pg_sz);
-	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
-	size = rte_mempool_xmem_size(mp->size, total_elt_sz, pg_shift,
-					mp->flags);
+	size = rte_mempool_ops_calc_mem_size(mp, mp->size, pg_shift,
+					     &min_chunk_size, &align);
 
 	return size;
 }
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index e531a15..191255d 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -400,6 +400,62 @@ typedef int (*rte_mempool_get_capabilities_t)(const struct rte_mempool *mp,
 typedef int (*rte_mempool_ops_register_memory_area_t)
 (const struct rte_mempool *mp, char *vaddr, rte_iova_t iova, size_t len);
 
+/**
+ * Calculate memory size required to store given number of objects.
+ *
+ * If mempool objects are not required to be IOVA-contiguous
+ * (the flag MEMPOOL_F_NO_IOVA_CONTIG is set), min_chunk_size defines
+ * virtually contiguous chunk size. Otherwise, if mempool objects must
+ * be IOVA-contiguous (the flag MEMPOOL_F_NO_IOVA_CONTIG is clear),
+ * min_chunk_size defines IOVA-contiguous chunk size.
+ *
+ * @param[in] mp
+ *   Pointer to the memory pool.
+ * @param[in] obj_num
+ *   Number of objects.
+ * @param[in] pg_shift
+ *   LOG2 of the physical pages size. If set to 0, ignore page boundaries.
+ * @param[out] min_chunk_size
+ *   Location for minimum size of the memory chunk which may be used to
+ *   store memory pool objects.
+ * @param[out] align
+ *   Location for required memory chunk alignment.
+ * @return
+ *   Required memory size aligned at page boundary.
+ */
+typedef ssize_t (*rte_mempool_calc_mem_size_t)(const struct rte_mempool *mp,
+		uint32_t obj_num,  uint32_t pg_shift,
+		size_t *min_chunk_size, size_t *align);
+
+/**
+ * Default way to calculate memory size required to store given number of
+ * objects.
+ *
+ * If page boundaries may be ignored, it is just a product of total
+ * object size including header and trailer and number of objects.
+ * Otherwise, it is a number of pages required to store given number of
+ * objects without crossing page boundary.
+ *
+ * Note that if object size is bigger than page size, then it assumes
+ * that pages are grouped in subsets of physically continuous pages big
+ * enough to store at least one object.
+ *
+ * If mempool driver requires object addresses to be block size aligned
+ * (MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS), space for one extra element is
+ * reserved to be able to meet the requirement.
+ *
+ * Minimum size of memory chunk is either all required space, if
+ * capabilities say that whole memory area must be physically contiguous
+ * (MEMPOOL_F_CAPA_PHYS_CONTIG), or a maximum of the page size and total
+ * element size.
+ *
+ * Required memory chunk alignment is a maximum of page size and cache
+ * line size.
+ */
+ssize_t rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
+		uint32_t obj_num, uint32_t pg_shift,
+		size_t *min_chunk_size, size_t *align);
+
 /** Structure defining mempool operations structure */
 struct rte_mempool_ops {
 	char name[RTE_MEMPOOL_OPS_NAMESIZE]; /**< Name of mempool ops struct. */
@@ -416,6 +472,11 @@ struct rte_mempool_ops {
 	 * Notify new memory area to mempool
 	 */
 	rte_mempool_ops_register_memory_area_t register_memory_area;
+	/**
+	 * Optional callback to calculate memory size required to
+	 * store specified number of objects.
+	 */
+	rte_mempool_calc_mem_size_t calc_mem_size;
 } __rte_cache_aligned;
 
 #define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */
@@ -565,6 +626,29 @@ rte_mempool_ops_register_memory_area(const struct rte_mempool *mp,
 				char *vaddr, rte_iova_t iova, size_t len);
 
 /**
+ * @internal wrapper for mempool_ops calc_mem_size callback.
+ * API to calculate size of memory required to store specified number of
+ * object.
+ *
+ * @param[in] mp
+ *   Pointer to the memory pool.
+ * @param[in] obj_num
+ *   Number of objects.
+ * @param[in] pg_shift
+ *   LOG2 of the physical pages size. If set to 0, ignore page boundaries.
+ * @param[out] min_chunk_size
+ *   Location for minimum size of the memory chunk which may be used to
+ *   store memory pool objects.
+ * @param[out] align
+ *   Location for required memory chunk alignment.
+ * @return
+ *   Required memory size aligned at page boundary.
+ */
+ssize_t rte_mempool_ops_calc_mem_size(const struct rte_mempool *mp,
+				      uint32_t obj_num, uint32_t pg_shift,
+				      size_t *min_chunk_size, size_t *align);
+
+/**
  * @internal wrapper for mempool_ops free callback.
  *
  * @param mp
@@ -1534,7 +1618,7 @@ uint32_t rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
  * of objects. Assume that the memory buffer will be aligned at page
  * boundary.
  *
- * Note that if object size is bigger then page size, then it assumes
+ * Note that if object size is bigger than page size, then it assumes
  * that pages are grouped in subsets of physically continuous pages big
  * enough to store at least one object.
  *
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
index 0732255..26908cc 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -59,6 +59,7 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h)
 	ops->get_count = h->get_count;
 	ops->get_capabilities = h->get_capabilities;
 	ops->register_memory_area = h->register_memory_area;
+	ops->calc_mem_size = h->calc_mem_size;
 
 	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
 
@@ -123,6 +124,23 @@ rte_mempool_ops_register_memory_area(const struct rte_mempool *mp, char *vaddr,
 	return ops->register_memory_area(mp, vaddr, iova, len);
 }
 
+/* wrapper to notify new memory area to external mempool */
+ssize_t
+rte_mempool_ops_calc_mem_size(const struct rte_mempool *mp,
+				uint32_t obj_num, uint32_t pg_shift,
+				size_t *min_chunk_size, size_t *align)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_get_ops(mp->ops_index);
+
+	if (ops->calc_mem_size == NULL)
+		return rte_mempool_op_calc_mem_size_default(mp, obj_num,
+				pg_shift, min_chunk_size, align);
+
+	return ops->calc_mem_size(mp, obj_num, pg_shift, min_chunk_size, align);
+}
+
 /* sets mempool ops previously registered by rte_mempool_register_ops. */
 int
 rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
diff --git a/lib/librte_mempool/rte_mempool_ops_default.c b/lib/librte_mempool/rte_mempool_ops_default.c
new file mode 100644
index 0000000..57fe79b
--- /dev/null
+++ b/lib/librte_mempool/rte_mempool_ops_default.c
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2016 Intel Corporation.
+ * Copyright(c) 2016 6WIND S.A.
+ * Copyright(c) 2018 Solarflare Communications Inc.
+ */
+
+#include <rte_mempool.h>
+
+ssize_t
+rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
+				     uint32_t obj_num, uint32_t pg_shift,
+				     size_t *min_chunk_size, size_t *align)
+{
+	unsigned int mp_flags;
+	int ret;
+	size_t total_elt_sz;
+	size_t mem_size;
+
+	/* Get mempool capabilities */
+	mp_flags = 0;
+	ret = rte_mempool_ops_get_capabilities(mp, &mp_flags);
+	if ((ret < 0) && (ret != -ENOTSUP))
+		return ret;
+
+	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
+
+	mem_size = rte_mempool_xmem_size(obj_num, total_elt_sz, pg_shift,
+					 mp->flags | mp_flags);
+
+	if (mp_flags & MEMPOOL_F_CAPA_PHYS_CONTIG)
+		*min_chunk_size = mem_size;
+	else
+		*min_chunk_size = RTE_MAX((size_t)1 << pg_shift, total_elt_sz);
+
+	*align = RTE_MAX((size_t)RTE_CACHE_LINE_SIZE, (size_t)1 << pg_shift);
+
+	return mem_size;
+}
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index 62b76f9..cb38189 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -51,3 +51,10 @@ DPDK_17.11 {
 	rte_mempool_populate_iova_tab;
 
 } DPDK_16.07;
+
+DPDK_18.05 {
+	global:
+
+	rte_mempool_op_calc_mem_size_default;
+
+} DPDK_17.11;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [PATCH v3 05/11] mempool: add op to populate objects using provided memory
  2018-03-26 16:09   ` [PATCH v3 00/11] mempool: prepare to add bucket driver Andrew Rybchenko
                       ` (3 preceding siblings ...)
  2018-03-26 16:09     ` [PATCH v3 04/11] mempool: add op to calculate memory size to be allocated Andrew Rybchenko
@ 2018-03-26 16:09     ` Andrew Rybchenko
  2018-04-04 15:09       ` santosh
  2018-04-06 15:51       ` Olivier Matz
  2018-03-26 16:09     ` [PATCH v3 06/11] mempool: remove callback to get capabilities Andrew Rybchenko
                       ` (5 subsequent siblings)
  10 siblings, 2 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-26 16:09 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ

The callback allows customizing how objects are stored in the
memory chunk. A default implementation of the callback, which simply
puts objects one by one, is available.
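
A rough sketch of a hypothetical driver callback (not part of this patch)
which does its own bookkeeping for the memory chunk and then delegates
object placement to the default helper:

static int
example_populate(struct rte_mempool *mp, unsigned int max_objs,
		 void *vaddr, rte_iova_t iova, size_t len,
		 rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
{
	/* Driver-specific setup for the [vaddr, vaddr + len) range,
	 * e.g. programming it into hardware, would go here.
	 */

	/* Slice objects one by one and enqueue them to the pool. */
	return rte_mempool_op_populate_default(mp, max_objs, vaddr, iova,
					       len, obj_cb, obj_cb_arg);
}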

Suggested-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
v2 -> v3:
 - none

v1 -> v2:
 - fix memory leak if off is bigger than len

RFCv2 -> v1:
 - advertise ABI changes in release notes
 - use consistent name for default callback:
   rte_mempool_op_<callback>_default()
 - add opaque data pointer to populated object callback
 - move default callback to dedicated file

 doc/guides/rel_notes/deprecation.rst         |  2 +-
 doc/guides/rel_notes/release_18_05.rst       |  2 +
 lib/librte_mempool/rte_mempool.c             | 23 ++++---
 lib/librte_mempool/rte_mempool.h             | 90 ++++++++++++++++++++++++++++
 lib/librte_mempool/rte_mempool_ops.c         | 21 +++++++
 lib/librte_mempool/rte_mempool_ops_default.c | 24 ++++++++
 lib/librte_mempool/rte_mempool_version.map   |  1 +
 7 files changed, 149 insertions(+), 14 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index e02d4ca..c06fc67 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -72,7 +72,7 @@ Deprecation Notices
 
   - removal of ``get_capabilities`` mempool ops and related flags.
   - substitute ``register_memory_area`` with ``populate`` ops.
-  - addition of new ops to customize objects population and allocate contiguous
+  - addition of new op to allocate contiguous
     block of objects if underlying driver supports it.
 
 * mbuf: The control mbuf API will be removed in v18.05. The impacted
diff --git a/doc/guides/rel_notes/release_18_05.rst b/doc/guides/rel_notes/release_18_05.rst
index 59583ea..abaefe5 100644
--- a/doc/guides/rel_notes/release_18_05.rst
+++ b/doc/guides/rel_notes/release_18_05.rst
@@ -84,6 +84,8 @@ ABI Changes
 
   A new callback ``calc_mem_size`` has been added to ``rte_mempool_ops``
   to allow to customize required memory size calculation.
+  A new callback ``populate`` has been added to ``rte_mempool_ops``
+  to allow to customize objects population.
 
 
 Removed Items
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index dd2d0fe..d917dc7 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -99,7 +99,8 @@ static unsigned optimize_object_size(unsigned obj_size)
 }
 
 static void
-mempool_add_elem(struct rte_mempool *mp, void *obj, rte_iova_t iova)
+mempool_add_elem(struct rte_mempool *mp, __rte_unused void *opaque,
+		 void *obj, rte_iova_t iova)
 {
 	struct rte_mempool_objhdr *hdr;
 	struct rte_mempool_objtlr *tlr __rte_unused;
@@ -116,9 +117,6 @@ mempool_add_elem(struct rte_mempool *mp, void *obj, rte_iova_t iova)
 	tlr = __mempool_get_trailer(obj);
 	tlr->cookie = RTE_MEMPOOL_TRAILER_COOKIE;
 #endif
-
-	/* enqueue in ring */
-	rte_mempool_ops_enqueue_bulk(mp, &obj, 1);
 }
 
 /* call obj_cb() for each mempool element */
@@ -407,17 +405,16 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	else
 		off = RTE_PTR_ALIGN_CEIL(vaddr, RTE_CACHE_LINE_SIZE) - vaddr;
 
-	while (off + total_elt_sz <= len && mp->populated_size < mp->size) {
-		off += mp->header_size;
-		if (iova == RTE_BAD_IOVA)
-			mempool_add_elem(mp, (char *)vaddr + off,
-				RTE_BAD_IOVA);
-		else
-			mempool_add_elem(mp, (char *)vaddr + off, iova + off);
-		off += mp->elt_size + mp->trailer_size;
-		i++;
+	if (off > len) {
+		ret = -EINVAL;
+		goto fail;
 	}
 
+	i = rte_mempool_ops_populate(mp, mp->size - mp->populated_size,
+		(char *)vaddr + off,
+		(iova == RTE_BAD_IOVA) ? RTE_BAD_IOVA : (iova + off),
+		len - off, mempool_add_elem, NULL);
+
 	/* not enough room to store one object */
 	if (i == 0) {
 		ret = -EINVAL;
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 191255d..754261e 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -456,6 +456,63 @@ ssize_t rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
 		uint32_t obj_num, uint32_t pg_shift,
 		size_t *min_chunk_size, size_t *align);
 
+/**
+ * Function to be called for each populated object.
+ *
+ * @param[in] mp
+ *   A pointer to the mempool structure.
+ * @param[in] opaque
+ *   An opaque pointer passed to iterator.
+ * @param[in] vaddr
+ *   Object virtual address.
+ * @param[in] iova
+ *   Input/output virtual address of the object or RTE_BAD_IOVA.
+ */
+typedef void (rte_mempool_populate_obj_cb_t)(struct rte_mempool *mp,
+		void *opaque, void *vaddr, rte_iova_t iova);
+
+/**
+ * Populate memory pool objects using provided memory chunk.
+ *
+ * Populated objects should be enqueued to the pool, e.g. using
+ * rte_mempool_ops_enqueue_bulk().
+ *
+ * If the given IO address is unknown (iova = RTE_BAD_IOVA),
+ * the chunk doesn't need to be physically contiguous (only virtually),
+ * and allocated objects may span two pages.
+ *
+ * @param[in] mp
+ *   A pointer to the mempool structure.
+ * @param[in] max_objs
+ *   Maximum number of objects to be populated.
+ * @param[in] vaddr
+ *   The virtual address of memory that should be used to store objects.
+ * @param[in] iova
+ *   The IO address
+ * @param[in] len
+ *   The length of memory in bytes.
+ * @param[in] obj_cb
+ *   Callback function to be executed for each populated object.
+ * @param[in] obj_cb_arg
+ *   An opaque pointer passed to the callback function.
+ * @return
+ *   The number of objects added on success.
+ *   On error, no objects are populated and a negative errno is returned.
+ */
+typedef int (*rte_mempool_populate_t)(struct rte_mempool *mp,
+		unsigned int max_objs,
+		void *vaddr, rte_iova_t iova, size_t len,
+		rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg);
+
+/**
+ * Default way to populate memory pool object using provided memory
+ * chunk: just slice objects one by one.
+ */
+int rte_mempool_op_populate_default(struct rte_mempool *mp,
+		unsigned int max_objs,
+		void *vaddr, rte_iova_t iova, size_t len,
+		rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg);
+
 /** Structure defining mempool operations structure */
 struct rte_mempool_ops {
 	char name[RTE_MEMPOOL_OPS_NAMESIZE]; /**< Name of mempool ops struct. */
@@ -477,6 +534,11 @@ struct rte_mempool_ops {
 	 * store specified number of objects.
 	 */
 	rte_mempool_calc_mem_size_t calc_mem_size;
+	/**
+	 * Optional callback to populate mempool objects using
+	 * provided memory chunk.
+	 */
+	rte_mempool_populate_t populate;
 } __rte_cache_aligned;
 
 #define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */
@@ -649,6 +711,34 @@ ssize_t rte_mempool_ops_calc_mem_size(const struct rte_mempool *mp,
 				      size_t *min_chunk_size, size_t *align);
 
 /**
+ * @internal wrapper for mempool_ops populate callback.
+ *
+ * Populate memory pool objects using provided memory chunk.
+ *
+ * @param[in] mp
+ *   A pointer to the mempool structure.
+ * @param[in] max_objs
+ *   Maximum number of objects to be populated.
+ * @param[in] vaddr
+ *   The virtual address of memory that should be used to store objects.
+ * @param[in] iova
+ *   The IO address
+ * @param[in] len
+ *   The length of memory in bytes.
+ * @param[in] obj_cb
+ *   Callback function to be executed for each populated object.
+ * @param[in] obj_cb_arg
+ *   An opaque pointer passed to the callback function.
+ * @return
+ *   The number of objects added on success.
+ *   On error, no objects are populated and a negative errno is returned.
+ */
+int rte_mempool_ops_populate(struct rte_mempool *mp, unsigned int max_objs,
+			     void *vaddr, rte_iova_t iova, size_t len,
+			     rte_mempool_populate_obj_cb_t *obj_cb,
+			     void *obj_cb_arg);
+
+/**
  * @internal wrapper for mempool_ops free callback.
  *
  * @param mp
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
index 26908cc..1a7f39f 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -60,6 +60,7 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h)
 	ops->get_capabilities = h->get_capabilities;
 	ops->register_memory_area = h->register_memory_area;
 	ops->calc_mem_size = h->calc_mem_size;
+	ops->populate = h->populate;
 
 	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
 
@@ -141,6 +142,26 @@ rte_mempool_ops_calc_mem_size(const struct rte_mempool *mp,
 	return ops->calc_mem_size(mp, obj_num, pg_shift, min_chunk_size, align);
 }
 
+/* wrapper to populate memory pool objects using provided memory chunk */
+int
+rte_mempool_ops_populate(struct rte_mempool *mp, unsigned int max_objs,
+				void *vaddr, rte_iova_t iova, size_t len,
+				rte_mempool_populate_obj_cb_t *obj_cb,
+				void *obj_cb_arg)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_get_ops(mp->ops_index);
+
+	if (ops->populate == NULL)
+		return rte_mempool_op_populate_default(mp, max_objs, vaddr,
+						       iova, len, obj_cb,
+						       obj_cb_arg);
+
+	return ops->populate(mp, max_objs, vaddr, iova, len, obj_cb,
+			     obj_cb_arg);
+}
+
 /* sets mempool ops previously registered by rte_mempool_register_ops. */
 int
 rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
diff --git a/lib/librte_mempool/rte_mempool_ops_default.c b/lib/librte_mempool/rte_mempool_ops_default.c
index 57fe79b..57295f7 100644
--- a/lib/librte_mempool/rte_mempool_ops_default.c
+++ b/lib/librte_mempool/rte_mempool_ops_default.c
@@ -36,3 +36,27 @@ rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
 
 	return mem_size;
 }
+
+int
+rte_mempool_op_populate_default(struct rte_mempool *mp, unsigned int max_objs,
+		void *vaddr, rte_iova_t iova, size_t len,
+		rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
+{
+	size_t total_elt_sz;
+	size_t off;
+	unsigned int i;
+	void *obj;
+
+	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
+
+	for (off = 0, i = 0; off + total_elt_sz <= len && i < max_objs; i++) {
+		off += mp->header_size;
+		obj = (char *)vaddr + off;
+		obj_cb(mp, obj_cb_arg, obj,
+		       (iova == RTE_BAD_IOVA) ? RTE_BAD_IOVA : (iova + off));
+		rte_mempool_ops_enqueue_bulk(mp, &obj, 1);
+		off += mp->elt_size + mp->trailer_size;
+	}
+
+	return i;
+}
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index cb38189..41a0b09 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -56,5 +56,6 @@ DPDK_18.05 {
 	global:
 
 	rte_mempool_op_calc_mem_size_default;
+	rte_mempool_op_populate_default;
 
 } DPDK_17.11;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread
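
To illustrate the intended use of the new op, a minimal sketch of a driver
that records the memory range for its own needs and then delegates the
object slicing to the default implementation. The ops name "my_drv" and the
helper my_drv_record_range() are hypothetical and not part of any patch;
the mandatory alloc/free/enqueue/dequeue/get_count callbacks are omitted
for brevity and must be provided by a real driver.

#include <rte_mempool.h>

/* Hypothetical helper: a real driver would program its HW pool here. */
static int
my_drv_record_range(struct rte_mempool *mp, void *vaddr, rte_iova_t iova,
		    size_t len)
{
	RTE_SET_USED(mp);
	RTE_SET_USED(vaddr);
	RTE_SET_USED(iova);
	RTE_SET_USED(len);
	return 0;
}

static int
my_drv_populate(struct rte_mempool *mp, unsigned int max_objs,
		void *vaddr, rte_iova_t iova, size_t len,
		rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
{
	int rc;

	/* Let the driver learn about the memory chunk first. */
	rc = my_drv_record_range(mp, vaddr, iova, len);
	if (rc < 0)
		return rc;

	/* Then slice objects one by one as the default op does. */
	return rte_mempool_op_populate_default(mp, max_objs, vaddr, iova, len,
					       obj_cb, obj_cb_arg);
}

static struct rte_mempool_ops my_drv_ops = {
	.name = "my_drv",
	/* .alloc, .free, .enqueue, .dequeue, .get_count omitted in sketch */
	.populate = my_drv_populate,
};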

* [PATCH v3 06/11] mempool: remove callback to get capabilities
  2018-03-26 16:09   ` [PATCH v3 00/11] mempool: prepare to add bucket driver Andrew Rybchenko
                       ` (4 preceding siblings ...)
  2018-03-26 16:09     ` [PATCH v3 05/11] mempool: add op to populate objects using provided memory Andrew Rybchenko
@ 2018-03-26 16:09     ` Andrew Rybchenko
  2018-04-04 15:10       ` santosh
  2018-04-06 15:51       ` Olivier Matz
  2018-03-26 16:09     ` [PATCH v3 07/11] mempool: deprecate xmem functions Andrew Rybchenko
                       ` (4 subsequent siblings)
  10 siblings, 2 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-26 16:09 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ, Santosh Shukla, Jerin Jacob

The callback was introduced to let generic code know the octeontx
mempool driver requirements: use a single physically contiguous
memory chunk to store all objects and align object addresses to the
total object size. Now these requirements are met using the new
callbacks to calculate the required memory chunk size and to populate
objects using the provided memory chunk.

These capability flags are not used anywhere else.

Restricting capabilities to flags is not generic and likely to be
insufficient to describe mempool driver features. If required in the
future, an API which returns structured information may be added.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
v2 -> v3:
 - none

v1 -> v2:
 - fix typo
 - rebase on top of patch which renames MEMPOOL_F_NO_PHYS_CONTIG

RFCv2 -> v1:
 - squash mempool/octeontx patches to add calc_mem_size and populate
   callbacks to this one in order to avoid breakages in the middle of
   patchset
 - advertise API changes in release notes

 doc/guides/rel_notes/deprecation.rst            |  1 -
 doc/guides/rel_notes/release_18_05.rst          | 11 +++++
 drivers/mempool/octeontx/rte_mempool_octeontx.c | 59 +++++++++++++++++++++----
 lib/librte_mempool/rte_mempool.c                | 44 ++----------------
 lib/librte_mempool/rte_mempool.h                | 52 +---------------------
 lib/librte_mempool/rte_mempool_ops.c            | 14 ------
 lib/librte_mempool/rte_mempool_ops_default.c    | 15 +------
 lib/librte_mempool/rte_mempool_version.map      |  1 -
 8 files changed, 68 insertions(+), 129 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index c06fc67..4deed9a 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -70,7 +70,6 @@ Deprecation Notices
 
   The following changes are planned:
 
-  - removal of ``get_capabilities`` mempool ops and related flags.
   - substitute ``register_memory_area`` with ``populate`` ops.
   - addition of new op to allocate contiguous
     block of objects if underlying driver supports it.
diff --git a/doc/guides/rel_notes/release_18_05.rst b/doc/guides/rel_notes/release_18_05.rst
index abaefe5..c50f26c 100644
--- a/doc/guides/rel_notes/release_18_05.rst
+++ b/doc/guides/rel_notes/release_18_05.rst
@@ -66,6 +66,14 @@ API Changes
    Also, make sure to start the actual text at the margin.
    =========================================================
 
+* **Removed mempool capability flags and related functions.**
+
+  Flags ``MEMPOOL_F_CAPA_PHYS_CONTIG`` and
+  ``MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS`` were used by octeontx mempool
+  driver to customize generic mempool library behaviour.
+  Now the new driver callbacks ``calc_mem_size`` and ``populate`` may be
+  used to achieve it without specific knowledge in the generic code.
+
 
 ABI Changes
 -----------
@@ -86,6 +94,9 @@ ABI Changes
   to allow to customize required memory size calculation.
   A new callback ``populate`` has been added to ``rte_mempool_ops``
   to allow to customize objects population.
+  Callback ``get_capabilities`` has been removed from ``rte_mempool_ops``
+  since its features are covered by ``calc_mem_size`` and ``populate``
+  callbacks.
 
 
 Removed Items
diff --git a/drivers/mempool/octeontx/rte_mempool_octeontx.c b/drivers/mempool/octeontx/rte_mempool_octeontx.c
index d143d05..64ed528 100644
--- a/drivers/mempool/octeontx/rte_mempool_octeontx.c
+++ b/drivers/mempool/octeontx/rte_mempool_octeontx.c
@@ -126,14 +126,29 @@ octeontx_fpavf_get_count(const struct rte_mempool *mp)
 	return octeontx_fpa_bufpool_free_count(pool);
 }
 
-static int
-octeontx_fpavf_get_capabilities(const struct rte_mempool *mp,
-				unsigned int *flags)
+static ssize_t
+octeontx_fpavf_calc_mem_size(const struct rte_mempool *mp,
+			     uint32_t obj_num, uint32_t pg_shift,
+			     size_t *min_chunk_size, size_t *align)
 {
-	RTE_SET_USED(mp);
-	*flags |= (MEMPOOL_F_CAPA_PHYS_CONTIG |
-			MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS);
-	return 0;
+	ssize_t mem_size;
+
+	/*
+	 * Simply need space for one more object to be able to
+	 * fulfil alignment requirements.
+	 */
+	mem_size = rte_mempool_op_calc_mem_size_default(mp, obj_num + 1,
+							pg_shift,
+							min_chunk_size, align);
+	if (mem_size >= 0) {
+		/*
+		 * Memory area which contains objects must be physically
+		 * contiguous.
+		 */
+		*min_chunk_size = mem_size;
+	}
+
+	return mem_size;
 }
 
 static int
@@ -150,6 +165,33 @@ octeontx_fpavf_register_memory_area(const struct rte_mempool *mp,
 	return octeontx_fpavf_pool_set_range(pool_bar, len, vaddr, gpool);
 }
 
+static int
+octeontx_fpavf_populate(struct rte_mempool *mp, unsigned int max_objs,
+			void *vaddr, rte_iova_t iova, size_t len,
+			rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
+{
+	size_t total_elt_sz;
+	size_t off;
+
+	if (iova == RTE_BAD_IOVA)
+		return -EINVAL;
+
+	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
+
+	/* align object start address to a multiple of total_elt_sz */
+	off = total_elt_sz - ((uintptr_t)vaddr % total_elt_sz);
+
+	if (len < off)
+		return -EINVAL;
+
+	vaddr = (char *)vaddr + off;
+	iova += off;
+	len -= off;
+
+	return rte_mempool_op_populate_default(mp, max_objs, vaddr, iova, len,
+					       obj_cb, obj_cb_arg);
+}
+
 static struct rte_mempool_ops octeontx_fpavf_ops = {
 	.name = "octeontx_fpavf",
 	.alloc = octeontx_fpavf_alloc,
@@ -157,8 +199,9 @@ static struct rte_mempool_ops octeontx_fpavf_ops = {
 	.enqueue = octeontx_fpavf_enqueue,
 	.dequeue = octeontx_fpavf_dequeue,
 	.get_count = octeontx_fpavf_get_count,
-	.get_capabilities = octeontx_fpavf_get_capabilities,
 	.register_memory_area = octeontx_fpavf_register_memory_area,
+	.calc_mem_size = octeontx_fpavf_calc_mem_size,
+	.populate = octeontx_fpavf_populate,
 };
 
 MEMPOOL_REGISTER_OPS(octeontx_fpavf_ops);
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index d917dc7..40eedde 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -208,15 +208,9 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
  */
 size_t
 rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
-		      unsigned int flags)
+		      __rte_unused unsigned int flags)
 {
 	size_t obj_per_page, pg_num, pg_sz;
-	unsigned int mask;
-
-	mask = MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS | MEMPOOL_F_CAPA_PHYS_CONTIG;
-	if ((flags & mask) == mask)
-		/* alignment need one additional object */
-		elt_num += 1;
 
 	if (total_elt_sz == 0)
 		return 0;
@@ -240,18 +234,12 @@ rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
 ssize_t
 rte_mempool_xmem_usage(__rte_unused void *vaddr, uint32_t elt_num,
 	size_t total_elt_sz, const rte_iova_t iova[], uint32_t pg_num,
-	uint32_t pg_shift, unsigned int flags)
+	uint32_t pg_shift, __rte_unused unsigned int flags)
 {
 	uint32_t elt_cnt = 0;
 	rte_iova_t start, end;
 	uint32_t iova_idx;
 	size_t pg_sz = (size_t)1 << pg_shift;
-	unsigned int mask;
-
-	mask = MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS | MEMPOOL_F_CAPA_PHYS_CONTIG;
-	if ((flags & mask) == mask)
-		/* alignment need one additional object */
-		elt_num += 1;
 
 	/* if iova is NULL, assume contiguous memory */
 	if (iova == NULL) {
@@ -345,8 +333,6 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
 	void *opaque)
 {
-	unsigned total_elt_sz;
-	unsigned int mp_capa_flags;
 	unsigned i = 0;
 	size_t off;
 	struct rte_mempool_memhdr *memhdr;
@@ -365,27 +351,6 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	if (mp->populated_size >= mp->size)
 		return -ENOSPC;
 
-	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
-
-	/* Get mempool capabilities */
-	mp_capa_flags = 0;
-	ret = rte_mempool_ops_get_capabilities(mp, &mp_capa_flags);
-	if ((ret < 0) && (ret != -ENOTSUP))
-		return ret;
-
-	/* update mempool capabilities */
-	mp->flags |= mp_capa_flags;
-
-	/* Detect pool area has sufficient space for elements */
-	if (mp_capa_flags & MEMPOOL_F_CAPA_PHYS_CONTIG) {
-		if (len < total_elt_sz * mp->size) {
-			RTE_LOG(ERR, MEMPOOL,
-				"pool area %" PRIx64 " not enough\n",
-				(uint64_t)len);
-			return -ENOSPC;
-		}
-	}
-
 	memhdr = rte_zmalloc("MEMPOOL_MEMHDR", sizeof(*memhdr), 0);
 	if (memhdr == NULL)
 		return -ENOMEM;
@@ -397,10 +362,7 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	memhdr->free_cb = free_cb;
 	memhdr->opaque = opaque;
 
-	if (mp_capa_flags & MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS)
-		/* align object start address to a multiple of total_elt_sz */
-		off = total_elt_sz - ((uintptr_t)vaddr % total_elt_sz);
-	else if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
+	if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
 		off = RTE_PTR_ALIGN_CEIL(vaddr, 8) - vaddr;
 	else
 		off = RTE_PTR_ALIGN_CEIL(vaddr, RTE_CACHE_LINE_SIZE) - vaddr;
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 754261e..0b83d5e 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -246,24 +246,6 @@ struct rte_mempool {
 #define MEMPOOL_F_POOL_CREATED   0x0010 /**< Internal: pool is created. */
 #define MEMPOOL_F_NO_IOVA_CONTIG 0x0020 /**< Don't need IOVA contiguous objs. */
 #define MEMPOOL_F_NO_PHYS_CONTIG MEMPOOL_F_NO_IOVA_CONTIG /* deprecated */
-/**
- * This capability flag is advertised by a mempool handler, if the whole
- * memory area containing the objects must be physically contiguous.
- * Note: This flag should not be passed by application.
- */
-#define MEMPOOL_F_CAPA_PHYS_CONTIG 0x0040
-/**
- * This capability flag is advertised by a mempool handler. Used for a case
- * where mempool driver wants object start address(vaddr) aligned to block
- * size(/ total element size).
- *
- * Note:
- * - This flag should not be passed by application.
- *   Flag used for mempool driver only.
- * - Mempool driver must also set MEMPOOL_F_CAPA_PHYS_CONTIG flag along with
- *   MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS.
- */
-#define MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS 0x0080
 
 /**
  * @internal When debug is enabled, store some statistics.
@@ -389,12 +371,6 @@ typedef int (*rte_mempool_dequeue_t)(struct rte_mempool *mp,
 typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
 
 /**
- * Get the mempool capabilities.
- */
-typedef int (*rte_mempool_get_capabilities_t)(const struct rte_mempool *mp,
-		unsigned int *flags);
-
-/**
  * Notify new memory area to mempool.
  */
 typedef int (*rte_mempool_ops_register_memory_area_t)
@@ -440,13 +416,7 @@ typedef ssize_t (*rte_mempool_calc_mem_size_t)(const struct rte_mempool *mp,
  * that pages are grouped in subsets of physically continuous pages big
  * enough to store at least one object.
  *
- * If mempool driver requires object addresses to be block size aligned
- * (MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS), space for one extra element is
- * reserved to be able to meet the requirement.
- *
- * Minimum size of memory chunk is either all required space, if
- * capabilities say that whole memory area must be physically contiguous
- * (MEMPOOL_F_CAPA_PHYS_CONTIG), or a maximum of the page size and total
+ * Minimum size of memory chunk is a maximum of the page size and total
  * element size.
  *
  * Required memory chunk alignment is a maximum of page size and cache
@@ -522,10 +492,6 @@ struct rte_mempool_ops {
 	rte_mempool_dequeue_t dequeue;   /**< Dequeue an object. */
 	rte_mempool_get_count get_count; /**< Get qty of available objs. */
 	/**
-	 * Get the mempool capabilities
-	 */
-	rte_mempool_get_capabilities_t get_capabilities;
-	/**
 	 * Notify new memory area to mempool
 	 */
 	rte_mempool_ops_register_memory_area_t register_memory_area;
@@ -651,22 +617,6 @@ unsigned
 rte_mempool_ops_get_count(const struct rte_mempool *mp);
 
 /**
- * @internal wrapper for mempool_ops get_capabilities callback.
- *
- * @param mp [in]
- *   Pointer to the memory pool.
- * @param flags [out]
- *   Pointer to the mempool flags.
- * @return
- *   - 0: Success; The mempool driver has advertised his pool capabilities in
- *   flags param.
- *   - -ENOTSUP - doesn't support get_capabilities ops (valid case).
- *   - Otherwise, pool create fails.
- */
-int
-rte_mempool_ops_get_capabilities(const struct rte_mempool *mp,
-					unsigned int *flags);
-/**
  * @internal wrapper for mempool_ops register_memory_area callback.
  * API to notify the mempool handler when a new memory area is added to pool.
  *
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
index 1a7f39f..6ac669a 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -57,7 +57,6 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h)
 	ops->enqueue = h->enqueue;
 	ops->dequeue = h->dequeue;
 	ops->get_count = h->get_count;
-	ops->get_capabilities = h->get_capabilities;
 	ops->register_memory_area = h->register_memory_area;
 	ops->calc_mem_size = h->calc_mem_size;
 	ops->populate = h->populate;
@@ -99,19 +98,6 @@ rte_mempool_ops_get_count(const struct rte_mempool *mp)
 	return ops->get_count(mp);
 }
 
-/* wrapper to get external mempool capabilities. */
-int
-rte_mempool_ops_get_capabilities(const struct rte_mempool *mp,
-					unsigned int *flags)
-{
-	struct rte_mempool_ops *ops;
-
-	ops = rte_mempool_get_ops(mp->ops_index);
-
-	RTE_FUNC_PTR_OR_ERR_RET(ops->get_capabilities, -ENOTSUP);
-	return ops->get_capabilities(mp, flags);
-}
-
 /* wrapper to notify new memory area to external mempool */
 int
 rte_mempool_ops_register_memory_area(const struct rte_mempool *mp, char *vaddr,
diff --git a/lib/librte_mempool/rte_mempool_ops_default.c b/lib/librte_mempool/rte_mempool_ops_default.c
index 57295f7..3defc15 100644
--- a/lib/librte_mempool/rte_mempool_ops_default.c
+++ b/lib/librte_mempool/rte_mempool_ops_default.c
@@ -11,26 +11,15 @@ rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
 				     uint32_t obj_num, uint32_t pg_shift,
 				     size_t *min_chunk_size, size_t *align)
 {
-	unsigned int mp_flags;
-	int ret;
 	size_t total_elt_sz;
 	size_t mem_size;
 
-	/* Get mempool capabilities */
-	mp_flags = 0;
-	ret = rte_mempool_ops_get_capabilities(mp, &mp_flags);
-	if ((ret < 0) && (ret != -ENOTSUP))
-		return ret;
-
 	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
 
 	mem_size = rte_mempool_xmem_size(obj_num, total_elt_sz, pg_shift,
-					 mp->flags | mp_flags);
+					 mp->flags);
 
-	if (mp_flags & MEMPOOL_F_CAPA_PHYS_CONTIG)
-		*min_chunk_size = mem_size;
-	else
-		*min_chunk_size = RTE_MAX((size_t)1 << pg_shift, total_elt_sz);
+	*min_chunk_size = RTE_MAX((size_t)1 << pg_shift, total_elt_sz);
 
 	*align = RTE_MAX((size_t)RTE_CACHE_LINE_SIZE, (size_t)1 << pg_shift);
 
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index 41a0b09..637f73f 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -45,7 +45,6 @@ DPDK_16.07 {
 DPDK_17.11 {
 	global:
 
-	rte_mempool_ops_get_capabilities;
 	rte_mempool_ops_register_memory_area;
 	rte_mempool_populate_iova;
 	rte_mempool_populate_iova_tab;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [PATCH v3 07/11] mempool: deprecate xmem functions
  2018-03-26 16:09   ` [PATCH v3 00/11] mempool: prepare to add bucket driver Andrew Rybchenko
                       ` (5 preceding siblings ...)
  2018-03-26 16:09     ` [PATCH v3 06/11] mempool: remove callback to get capabilities Andrew Rybchenko
@ 2018-03-26 16:09     ` Andrew Rybchenko
  2018-04-06 15:52       ` Olivier Matz
  2018-03-26 16:09     ` [PATCH v3 08/11] mempool/octeontx: prepare to remove register memory area op Andrew Rybchenko
                       ` (3 subsequent siblings)
  10 siblings, 1 reply; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-26 16:09 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ, Thomas Monjalon

Move the rte_mempool_xmem_size() code to an internal helper function
since it is required in two places: the deprecated
rte_mempool_xmem_size() and the non-deprecated
rte_mempool_op_calc_mem_size_default().

Suggested-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
v2 -> v3:
 - none

v1 -> v2:
 - deprecate rte_mempool_populate_iova_tab()
 - add -Wno-deprecated-declarations to fix build errors because of
   rte_mempool_populate_iova_tab() deprecation
 - add @deprecated to deprecated functions description

RFCv2 -> v1:
 - advertise deprecation in release notes
 - factor out default memory size calculation into non-deprecated
   internal function to avoid usage of deprecated function internally
 - remove test for deprecated functions to address build issue because
   of usage of deprecated functions (it is easy to allow usage of
   deprecated function in Makefile, but very complicated in meson)

 doc/guides/rel_notes/deprecation.rst         |  7 -------
 doc/guides/rel_notes/release_18_05.rst       | 11 ++++++++++
 lib/librte_mempool/Makefile                  |  3 +++
 lib/librte_mempool/meson.build               | 12 +++++++++++
 lib/librte_mempool/rte_mempool.c             | 19 ++++++++++++++---
 lib/librte_mempool/rte_mempool.h             | 30 +++++++++++++++++++++++++++
 lib/librte_mempool/rte_mempool_ops_default.c |  4 ++--
 test/test/test_mempool.c                     | 31 ----------------------------
 8 files changed, 74 insertions(+), 43 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 4deed9a..473330d 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -60,13 +60,6 @@ Deprecation Notices
   - ``rte_eal_mbuf_default_mempool_ops``
 
 * mempool: several API and ABI changes are planned in v18.05.
-  The following functions, introduced for Xen, which is not supported
-  anymore since v17.11, are hard to use, not used anywhere else in DPDK.
-  Therefore they will be deprecated in v18.05 and removed in v18.08:
-
-  - ``rte_mempool_xmem_create``
-  - ``rte_mempool_xmem_size``
-  - ``rte_mempool_xmem_usage``
 
   The following changes are planned:
 
diff --git a/doc/guides/rel_notes/release_18_05.rst b/doc/guides/rel_notes/release_18_05.rst
index c50f26c..6a8db54 100644
--- a/doc/guides/rel_notes/release_18_05.rst
+++ b/doc/guides/rel_notes/release_18_05.rst
@@ -74,6 +74,17 @@ API Changes
   Now the new driver callbacks ``calc_mem_size`` and ``populate`` may be
   used to achieve it without specific knowledge in the generic code.
 
+* **Deprecated mempool xmem functions.**
+
+  The following functions, introduced for Xen, which is not supported
+  anymore since v17.11, are hard to use, not used anywhere else in DPDK.
+  Therefore they were deprecated in v18.05 and will be removed in v18.08:
+
+  - ``rte_mempool_xmem_create``
+  - ``rte_mempool_xmem_size``
+  - ``rte_mempool_xmem_usage``
+  - ``rte_mempool_populate_iova_tab``
+
 
 ABI Changes
 -----------
diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
index 072740f..2c46fdd 100644
--- a/lib/librte_mempool/Makefile
+++ b/lib/librte_mempool/Makefile
@@ -7,6 +7,9 @@ include $(RTE_SDK)/mk/rte.vars.mk
 LIB = librte_mempool.a
 
 CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
+# Allow deprecated symbol to use deprecated rte_mempool_populate_iova_tab()
+# from earlier deprecated rte_mempool_populate_phys_tab()
+CFLAGS += -Wno-deprecated-declarations
 LDLIBS += -lrte_eal -lrte_ring
 
 EXPORT_MAP := rte_mempool_version.map
diff --git a/lib/librte_mempool/meson.build b/lib/librte_mempool/meson.build
index 9e3b527..22e912a 100644
--- a/lib/librte_mempool/meson.build
+++ b/lib/librte_mempool/meson.build
@@ -1,6 +1,18 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2017 Intel Corporation
 
+extra_flags = []
+
+# Allow deprecated symbol to use deprecated rte_mempool_populate_iova_tab()
+# from earlier deprecated rte_mempool_populate_phys_tab()
+extra_flags += '-Wno-deprecated-declarations'
+
+foreach flag: extra_flags
+	if cc.has_argument(flag)
+		cflags += flag
+	endif
+endforeach
+
 version = 4
 sources = files('rte_mempool.c', 'rte_mempool_ops.c',
 		'rte_mempool_ops_default.c')
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 40eedde..8c3b0b1 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -204,11 +204,13 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
 
 
 /*
- * Calculate maximum amount of memory required to store given number of objects.
+ * Internal function to calculate required memory chunk size shared
+ * by default implementation of the corresponding callback and
+ * deprecated external function.
  */
 size_t
-rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
-		      __rte_unused unsigned int flags)
+rte_mempool_calc_mem_size_helper(uint32_t elt_num, size_t total_elt_sz,
+				 uint32_t pg_shift)
 {
 	size_t obj_per_page, pg_num, pg_sz;
 
@@ -228,6 +230,17 @@ rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
 }
 
 /*
+ * Calculate maximum amount of memory required to store given number of objects.
+ */
+size_t
+rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
+		      __rte_unused unsigned int flags)
+{
+	return rte_mempool_calc_mem_size_helper(elt_num, total_elt_sz,
+						pg_shift);
+}
+
+/*
  * Calculate how much memory would be actually required with the
  * given memory footprint to store required number of elements.
  */
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 0b83d5e..9107f5a 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -427,6 +427,28 @@ ssize_t rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
 		size_t *min_chunk_size, size_t *align);
 
 /**
+ * @internal Helper function to calculate memory size required to store
+ * specified number of objects in assumption that the memory buffer will
+ * be aligned at page boundary.
+ *
+ * Note that if object size is bigger than page size, then it assumes
+ * that pages are grouped in subsets of physically contiguous pages big
+ * enough to store at least one object.
+ *
+ * @param elt_num
+ *   Number of elements.
+ * @param total_elt_sz
+ *   The size of each element, including header and trailer, as returned
+ *   by rte_mempool_calc_obj_size().
+ * @param pg_shift
+ *   LOG2 of the physical pages size. If set to 0, ignore page boundaries.
+ * @return
+ *   Required memory size aligned at page boundary.
+ */
+size_t rte_mempool_calc_mem_size_helper(uint32_t elt_num, size_t total_elt_sz,
+		uint32_t pg_shift);
+
+/**
  * Function to be called for each populated object.
  *
  * @param[in] mp
@@ -855,6 +877,7 @@ rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
 		   int socket_id, unsigned flags);
 
 /**
+ * @deprecated
  * Create a new mempool named *name* in memory.
  *
  * The pool contains n elements of elt_size. Its size is set to n.
@@ -912,6 +935,7 @@ rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
  *   The pointer to the new allocated mempool, on success. NULL on error
  *   with rte_errno set appropriately. See rte_mempool_create() for details.
  */
+__rte_deprecated
 struct rte_mempool *
 rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
 		unsigned cache_size, unsigned private_data_size,
@@ -1008,6 +1032,7 @@ int rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
 	void *opaque);
 
 /**
+ * @deprecated
  * Add physical memory for objects in the pool at init
  *
  * Add a virtually contiguous memory chunk in the pool where objects can
@@ -1033,6 +1058,7 @@ int rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
  *   On error, the chunks are not added in the memory list of the
  *   mempool and a negative errno is returned.
  */
+__rte_deprecated
 int rte_mempool_populate_iova_tab(struct rte_mempool *mp, char *vaddr,
 	const rte_iova_t iova[], uint32_t pg_num, uint32_t pg_shift,
 	rte_mempool_memchunk_free_cb_t *free_cb, void *opaque);
@@ -1652,6 +1678,7 @@ uint32_t rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
 	struct rte_mempool_objsz *sz);
 
 /**
+ * @deprecated
  * Get the size of memory required to store mempool elements.
  *
  * Calculate the maximum amount of memory required to store given number
@@ -1674,10 +1701,12 @@ uint32_t rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
  * @return
  *   Required memory size aligned at page boundary.
  */
+__rte_deprecated
 size_t rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz,
 	uint32_t pg_shift, unsigned int flags);
 
 /**
+ * @deprecated
  * Get the size of memory required to store mempool elements.
  *
  * Calculate how much memory would be actually required with the given
@@ -1705,6 +1734,7 @@ size_t rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz,
  *   buffer is too small, return a negative value whose absolute value
  *   is the actual number of elements that can be stored in that buffer.
  */
+__rte_deprecated
 ssize_t rte_mempool_xmem_usage(void *vaddr, uint32_t elt_num,
 	size_t total_elt_sz, const rte_iova_t iova[], uint32_t pg_num,
 	uint32_t pg_shift, unsigned int flags);
diff --git a/lib/librte_mempool/rte_mempool_ops_default.c b/lib/librte_mempool/rte_mempool_ops_default.c
index 3defc15..fd63ca1 100644
--- a/lib/librte_mempool/rte_mempool_ops_default.c
+++ b/lib/librte_mempool/rte_mempool_ops_default.c
@@ -16,8 +16,8 @@ rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
 
 	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
 
-	mem_size = rte_mempool_xmem_size(obj_num, total_elt_sz, pg_shift,
-					 mp->flags);
+	mem_size = rte_mempool_calc_mem_size_helper(obj_num, total_elt_sz,
+						    pg_shift);
 
 	*min_chunk_size = RTE_MAX((size_t)1 << pg_shift, total_elt_sz);
 
diff --git a/test/test/test_mempool.c b/test/test/test_mempool.c
index 63f921e..8d29af2 100644
--- a/test/test/test_mempool.c
+++ b/test/test/test_mempool.c
@@ -444,34 +444,6 @@ test_mempool_same_name_twice_creation(void)
 	return 0;
 }
 
-/*
- * Basic test for mempool_xmem functions.
- */
-static int
-test_mempool_xmem_misc(void)
-{
-	uint32_t elt_num, total_size;
-	size_t sz;
-	ssize_t usz;
-
-	elt_num = MAX_KEEP;
-	total_size = rte_mempool_calc_obj_size(MEMPOOL_ELT_SIZE, 0, NULL);
-	sz = rte_mempool_xmem_size(elt_num, total_size, MEMPOOL_PG_SHIFT_MAX,
-					0);
-
-	usz = rte_mempool_xmem_usage(NULL, elt_num, total_size, 0, 1,
-		MEMPOOL_PG_SHIFT_MAX, 0);
-
-	if (sz != (size_t)usz)  {
-		printf("failure @ %s: rte_mempool_xmem_usage(%u, %u) "
-			"returns: %#zx, while expected: %#zx;\n",
-			__func__, elt_num, total_size, sz, (size_t)usz);
-		return -1;
-	}
-
-	return 0;
-}
-
 static void
 walk_cb(struct rte_mempool *mp, void *userdata __rte_unused)
 {
@@ -596,9 +568,6 @@ test_mempool(void)
 	if (test_mempool_same_name_twice_creation() < 0)
 		goto err;
 
-	if (test_mempool_xmem_misc() < 0)
-		goto err;
-
 	/* test the stack handler */
 	if (test_mempool_basic(mp_stack, 1) < 0)
 		goto err;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread
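
A rough worked example of what the helper computes, assuming the total
element size is smaller than the page size; the figures are for
illustration only and rte_mempool_calc_mem_size_helper() is internal to
the mempool library:

#include <rte_mempool.h>

static size_t
example_required_mem_size(void)
{
	/*
	 * With 2 MB pages (pg_shift = 21) and total_elt_sz = 2176 bytes,
	 * one page holds 2097152 / 2176 = 963 objects, so storing 10240
	 * objects needs 11 pages and the helper returns 11 * 2 MB.
	 */
	return rte_mempool_calc_mem_size_helper(10240, 2176, 21);
}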

* [PATCH v3 08/11] mempool/octeontx: prepare to remove register memory area op
  2018-03-26 16:09   ` [PATCH v3 00/11] mempool: prepare to add bucket driver Andrew Rybchenko
                       ` (6 preceding siblings ...)
  2018-03-26 16:09     ` [PATCH v3 07/11] mempool: deprecate xmem functions Andrew Rybchenko
@ 2018-03-26 16:09     ` Andrew Rybchenko
  2018-04-04 15:12       ` santosh
  2018-03-26 16:09     ` [PATCH v3 09/11] mempool/dpaa: " Andrew Rybchenko
                       ` (2 subsequent siblings)
  10 siblings, 1 reply; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-26 16:09 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ, Santosh Shukla, Jerin Jacob

The callback to populate pool objects has all the required information
and is executed a bit later than the register memory area callback.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
v2 -> v3:
 - none

v1 -> v2
 - none

 drivers/mempool/octeontx/rte_mempool_octeontx.c | 25 ++++++++++---------------
 1 file changed, 10 insertions(+), 15 deletions(-)

diff --git a/drivers/mempool/octeontx/rte_mempool_octeontx.c b/drivers/mempool/octeontx/rte_mempool_octeontx.c
index 64ed528..ab94dfe 100644
--- a/drivers/mempool/octeontx/rte_mempool_octeontx.c
+++ b/drivers/mempool/octeontx/rte_mempool_octeontx.c
@@ -152,26 +152,15 @@ octeontx_fpavf_calc_mem_size(const struct rte_mempool *mp,
 }
 
 static int
-octeontx_fpavf_register_memory_area(const struct rte_mempool *mp,
-				    char *vaddr, rte_iova_t paddr, size_t len)
-{
-	RTE_SET_USED(paddr);
-	uint8_t gpool;
-	uintptr_t pool_bar;
-
-	gpool = octeontx_fpa_bufpool_gpool(mp->pool_id);
-	pool_bar = mp->pool_id & ~(uint64_t)FPA_GPOOL_MASK;
-
-	return octeontx_fpavf_pool_set_range(pool_bar, len, vaddr, gpool);
-}
-
-static int
 octeontx_fpavf_populate(struct rte_mempool *mp, unsigned int max_objs,
 			void *vaddr, rte_iova_t iova, size_t len,
 			rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
 {
 	size_t total_elt_sz;
 	size_t off;
+	uint8_t gpool;
+	uintptr_t pool_bar;
+	int ret;
 
 	if (iova == RTE_BAD_IOVA)
 		return -EINVAL;
@@ -188,6 +177,13 @@ octeontx_fpavf_populate(struct rte_mempool *mp, unsigned int max_objs,
 	iova += off;
 	len -= off;
 
+	gpool = octeontx_fpa_bufpool_gpool(mp->pool_id);
+	pool_bar = mp->pool_id & ~(uint64_t)FPA_GPOOL_MASK;
+
+	ret = octeontx_fpavf_pool_set_range(pool_bar, len, vaddr, gpool);
+	if (ret < 0)
+		return ret;
+
 	return rte_mempool_op_populate_default(mp, max_objs, vaddr, iova, len,
 					       obj_cb, obj_cb_arg);
 }
@@ -199,7 +195,6 @@ static struct rte_mempool_ops octeontx_fpavf_ops = {
 	.enqueue = octeontx_fpavf_enqueue,
 	.dequeue = octeontx_fpavf_dequeue,
 	.get_count = octeontx_fpavf_get_count,
-	.register_memory_area = octeontx_fpavf_register_memory_area,
 	.calc_mem_size = octeontx_fpavf_calc_mem_size,
 	.populate = octeontx_fpavf_populate,
 };
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [PATCH v3 09/11] mempool/dpaa: prepare to remove register memory area op
  2018-03-26 16:09   ` [PATCH v3 00/11] mempool: prepare to add bucket driver Andrew Rybchenko
                       ` (7 preceding siblings ...)
  2018-03-26 16:09     ` [PATCH v3 08/11] mempool/octeontx: prepare to remove register memory area op Andrew Rybchenko
@ 2018-03-26 16:09     ` Andrew Rybchenko
  2018-04-05  8:25       ` Hemant Agrawal
  2018-03-26 16:09     ` [PATCH v3 10/11] mempool: remove callback to register memory area Andrew Rybchenko
  2018-03-26 16:09     ` [PATCH v3 11/11] mempool: support flushing the default cache of the mempool Andrew Rybchenko
  10 siblings, 1 reply; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-26 16:09 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ, Hemant Agrawal, Shreyansh Jain

The populate mempool driver callback is executed a bit later than
register memory area, provides the same information and will
substitute the latter since it gives more flexibility: in addition
to notifying about the memory area, it allows customizing how
mempool objects are stored in memory.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
v2 -> v3:
 - fix build error because of prototype mismatch (char * -> void *)

v1 -> v2:
 - fix build error because of prototype mismatch

 drivers/mempool/dpaa/dpaa_mempool.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/drivers/mempool/dpaa/dpaa_mempool.c b/drivers/mempool/dpaa/dpaa_mempool.c
index 7b82f4b..580e464 100644
--- a/drivers/mempool/dpaa/dpaa_mempool.c
+++ b/drivers/mempool/dpaa/dpaa_mempool.c
@@ -264,10 +264,9 @@ dpaa_mbuf_get_count(const struct rte_mempool *mp)
 }
 
 static int
-dpaa_register_memory_area(const struct rte_mempool *mp,
-			  char *vaddr __rte_unused,
-			  rte_iova_t paddr __rte_unused,
-			  size_t len)
+dpaa_populate(struct rte_mempool *mp, unsigned int max_objs,
+	      void *vaddr, rte_iova_t paddr, size_t len,
+	      rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
 {
 	struct dpaa_bp_info *bp_info;
 	unsigned int total_elt_sz;
@@ -289,7 +288,9 @@ dpaa_register_memory_area(const struct rte_mempool *mp,
 	if (len >= total_elt_sz * mp->size)
 		bp_info->flags |= DPAA_MPOOL_SINGLE_SEGMENT;
 
-	return 0;
+	return rte_mempool_op_populate_default(mp, max_objs, vaddr, paddr, len,
+					       obj_cb, obj_cb_arg);
+
 }
 
 struct rte_mempool_ops dpaa_mpool_ops = {
@@ -299,7 +300,7 @@ struct rte_mempool_ops dpaa_mpool_ops = {
 	.enqueue = dpaa_mbuf_free_bulk,
 	.dequeue = dpaa_mbuf_alloc_bulk,
 	.get_count = dpaa_mbuf_get_count,
-	.register_memory_area = dpaa_register_memory_area,
+	.populate = dpaa_populate,
 };
 
 MEMPOOL_REGISTER_OPS(dpaa_mpool_ops);
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [PATCH v3 10/11] mempool: remove callback to register memory area
  2018-03-26 16:09   ` [PATCH v3 00/11] mempool: prepare to add bucket driver Andrew Rybchenko
                       ` (8 preceding siblings ...)
  2018-03-26 16:09     ` [PATCH v3 09/11] mempool/dpaa: " Andrew Rybchenko
@ 2018-03-26 16:09     ` Andrew Rybchenko
  2018-04-04 15:13       ` santosh
  2018-04-06 15:52       ` Olivier Matz
  2018-03-26 16:09     ` [PATCH v3 11/11] mempool: support flushing the default cache of the mempool Andrew Rybchenko
  10 siblings, 2 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-26 16:09 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ

The callback is not required any more since there is a new callback
to populate objects using a provided memory area, which supplies
the same information.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
v2 -> v3:
 - none

v1 -> v2:
 - none

RFCv2 -> v1:
 - advertise ABI changes in release notes

 doc/guides/rel_notes/deprecation.rst       |  1 -
 doc/guides/rel_notes/release_18_05.rst     |  2 ++
 lib/librte_mempool/rte_mempool.c           |  5 -----
 lib/librte_mempool/rte_mempool.h           | 31 ------------------------------
 lib/librte_mempool/rte_mempool_ops.c       | 14 --------------
 lib/librte_mempool/rte_mempool_version.map |  1 -
 6 files changed, 2 insertions(+), 52 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 473330d..5301259 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -63,7 +63,6 @@ Deprecation Notices
 
   The following changes are planned:
 
-  - substitute ``register_memory_area`` with ``populate`` ops.
   - addition of new op to allocate contiguous
     block of objects if underlying driver supports it.
 
diff --git a/doc/guides/rel_notes/release_18_05.rst b/doc/guides/rel_notes/release_18_05.rst
index 6a8db54..016c4ed 100644
--- a/doc/guides/rel_notes/release_18_05.rst
+++ b/doc/guides/rel_notes/release_18_05.rst
@@ -108,6 +108,8 @@ ABI Changes
   Callback ``get_capabilities`` has been removed from ``rte_mempool_ops``
   since its features are covered by ``calc_mem_size`` and ``populate``
   callbacks.
+  Callback ``register_memory_area`` has been removed from ``rte_mempool_ops``
+  since the new callback ``populate`` may be used instead of it.
 
 
 Removed Items
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 8c3b0b1..c58bcc6 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -355,11 +355,6 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	if (ret != 0)
 		return ret;
 
-	/* Notify memory area to mempool */
-	ret = rte_mempool_ops_register_memory_area(mp, vaddr, iova, len);
-	if (ret != -ENOTSUP && ret < 0)
-		return ret;
-
 	/* mempool is already populated */
 	if (mp->populated_size >= mp->size)
 		return -ENOSPC;
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 9107f5a..314f909 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -371,12 +371,6 @@ typedef int (*rte_mempool_dequeue_t)(struct rte_mempool *mp,
 typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
 
 /**
- * Notify new memory area to mempool.
- */
-typedef int (*rte_mempool_ops_register_memory_area_t)
-(const struct rte_mempool *mp, char *vaddr, rte_iova_t iova, size_t len);
-
-/**
  * Calculate memory size required to store given number of objects.
  *
  * If mempool objects are not required to be IOVA-contiguous
@@ -514,10 +508,6 @@ struct rte_mempool_ops {
 	rte_mempool_dequeue_t dequeue;   /**< Dequeue an object. */
 	rte_mempool_get_count get_count; /**< Get qty of available objs. */
 	/**
-	 * Notify new memory area to mempool
-	 */
-	rte_mempool_ops_register_memory_area_t register_memory_area;
-	/**
 	 * Optional callback to calculate memory size required to
 	 * store specified number of objects.
 	 */
@@ -639,27 +629,6 @@ unsigned
 rte_mempool_ops_get_count(const struct rte_mempool *mp);
 
 /**
- * @internal wrapper for mempool_ops register_memory_area callback.
- * API to notify the mempool handler when a new memory area is added to pool.
- *
- * @param mp
- *   Pointer to the memory pool.
- * @param vaddr
- *   Pointer to the buffer virtual address.
- * @param iova
- *   Pointer to the buffer IO address.
- * @param len
- *   Pool size.
- * @return
- *   - 0: Success;
- *   - -ENOTSUP - doesn't support register_memory_area ops (valid error case).
- *   - Otherwise, rte_mempool_populate_phys fails thus pool create fails.
- */
-int
-rte_mempool_ops_register_memory_area(const struct rte_mempool *mp,
-				char *vaddr, rte_iova_t iova, size_t len);
-
-/**
  * @internal wrapper for mempool_ops calc_mem_size callback.
  * API to calculate size of memory required to store specified number of
  * object.
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
index 6ac669a..ea9be1e 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -57,7 +57,6 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h)
 	ops->enqueue = h->enqueue;
 	ops->dequeue = h->dequeue;
 	ops->get_count = h->get_count;
-	ops->register_memory_area = h->register_memory_area;
 	ops->calc_mem_size = h->calc_mem_size;
 	ops->populate = h->populate;
 
@@ -99,19 +98,6 @@ rte_mempool_ops_get_count(const struct rte_mempool *mp)
 }
 
 /* wrapper to notify new memory area to external mempool */
-int
-rte_mempool_ops_register_memory_area(const struct rte_mempool *mp, char *vaddr,
-					rte_iova_t iova, size_t len)
-{
-	struct rte_mempool_ops *ops;
-
-	ops = rte_mempool_get_ops(mp->ops_index);
-
-	RTE_FUNC_PTR_OR_ERR_RET(ops->register_memory_area, -ENOTSUP);
-	return ops->register_memory_area(mp, vaddr, iova, len);
-}
-
-/* wrapper to notify new memory area to external mempool */
 ssize_t
 rte_mempool_ops_calc_mem_size(const struct rte_mempool *mp,
 				uint32_t obj_num, uint32_t pg_shift,
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index 637f73f..cf375db 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -45,7 +45,6 @@ DPDK_16.07 {
 DPDK_17.11 {
 	global:
 
-	rte_mempool_ops_register_memory_area;
 	rte_mempool_populate_iova;
 	rte_mempool_populate_iova_tab;
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [PATCH v3 11/11] mempool: support flushing the default cache of the mempool
  2018-03-26 16:09   ` [PATCH v3 00/11] mempool: prepare to add bucket driver Andrew Rybchenko
                       ` (9 preceding siblings ...)
  2018-03-26 16:09     ` [PATCH v3 10/11] mempool: remove callback to register memory area Andrew Rybchenko
@ 2018-03-26 16:09     ` Andrew Rybchenko
  2018-04-06 15:53       ` Olivier Matz
  10 siblings, 1 reply; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-26 16:09 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ, Artem V. Andreev

From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>

The mempool get/put API takes care of the cache itself, but sometimes
it is required to flush the cache explicitly.

The function is moved within the file since it now requires
rte_mempool_default_cache().

Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
v2 -> v3:
 - none

v1 -> v2:
 - none

 lib/librte_mempool/rte_mempool.h | 36 ++++++++++++++++++++----------------
 1 file changed, 20 insertions(+), 16 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 314f909..3e06ae0 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -1169,22 +1169,6 @@ void
 rte_mempool_cache_free(struct rte_mempool_cache *cache);
 
 /**
- * Flush a user-owned mempool cache to the specified mempool.
- *
- * @param cache
- *   A pointer to the mempool cache.
- * @param mp
- *   A pointer to the mempool.
- */
-static __rte_always_inline void
-rte_mempool_cache_flush(struct rte_mempool_cache *cache,
-			struct rte_mempool *mp)
-{
-	rte_mempool_ops_enqueue_bulk(mp, cache->objs, cache->len);
-	cache->len = 0;
-}
-
-/**
  * Get a pointer to the per-lcore default mempool cache.
  *
  * @param mp
@@ -1207,6 +1191,26 @@ rte_mempool_default_cache(struct rte_mempool *mp, unsigned lcore_id)
 }
 
 /**
+ * Flush a user-owned mempool cache to the specified mempool.
+ *
+ * @param cache
+ *   A pointer to the mempool cache.
+ * @param mp
+ *   A pointer to the mempool.
+ */
+static __rte_always_inline void
+rte_mempool_cache_flush(struct rte_mempool_cache *cache,
+			struct rte_mempool *mp)
+{
+	if (cache == NULL)
+		cache = rte_mempool_default_cache(mp, rte_lcore_id());
+	if (cache == NULL || cache->len == 0)
+		return;
+	rte_mempool_ops_enqueue_bulk(mp, cache->objs, cache->len);
+	cache->len = 0;
+}
+
+/**
  * @internal Put several objects back in the mempool; used internally.
  * @param mp
  *   A pointer to the mempool structure.
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread
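
A minimal usage sketch, assuming mp was created with per-lcore default
caches and user_cache is a user-owned cache; error handling is omitted:

#include <rte_mempool.h>

static void
drain_caches(struct rte_mempool *mp, struct rte_mempool_cache *user_cache)
{
	/* Passing NULL selects the calling lcore's default cache (if any). */
	rte_mempool_cache_flush(NULL, mp);

	/* A user-owned cache is flushed the same way, e.g. before freeing. */
	rte_mempool_cache_flush(user_cache, mp);
	rte_mempool_cache_free(user_cache);
}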

* [PATCH v1 0/6] mempool: add bucket driver
  2018-01-23 13:15 ` [RFC v2 00/17] " Andrew Rybchenko
                     ` (20 preceding siblings ...)
  2018-03-26 16:09   ` [PATCH v3 00/11] mempool: prepare to add bucket driver Andrew Rybchenko
@ 2018-03-26 16:12   ` Andrew Rybchenko
  2018-03-26 16:12     ` [PATCH v1 1/6] mempool/bucket: implement bucket mempool manager Andrew Rybchenko
                       ` (6 more replies)
  21 siblings, 7 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-26 16:12 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ

The initial patch series [1] (RFCv1 is [2]) is split into two to simplify
processing.  This is the second part, which relies on the first one [3].

It should be applied on top of [4] and [3].

The patch series adds bucket mempool driver which allows to allocate
(both physically and virtually) contiguous blocks of objects and adds
mempool API to do it. It is still capable to provide separate objects,
but it is definitely more heavy-weight than ring/stack drivers.
The driver will be used by future Solarflare driver enhancements
which make it possible to utilize physically contiguous blocks in the
NIC firmware.

The target usecase is dequeue in blocks and enqueue separate objects
back (which are collected in buckets to be dequeued). So, the memory
pool with bucket driver is created by an application and provided to
networking PMD receive queue. The choice of bucket driver is done using
rte_eth_dev_pool_ops_supported(). A PMD that relies upon contiguous
block allocation should report the bucket driver as the only supported
and preferred one.
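
For illustration, a minimal sketch of how an application could create such
a pool before attaching it to a receive queue. It assumes the ops name
registered by the bucket driver is "bucket"; object initialization and
detailed error handling are omitted.

#include <rte_mempool.h>

static struct rte_mempool *
create_bucket_pool(void)
{
	struct rte_mempool *mp;

	/* 8192 elements of 2176 bytes, no per-lcore cache in this sketch. */
	mp = rte_mempool_create_empty("rxq_pool", 8192, 2176, 0, 0,
				      SOCKET_ID_ANY, 0);
	if (mp == NULL)
		return NULL;

	/* Select the bucket driver instead of the default ops. */
	if (rte_mempool_set_ops_byname(mp, "bucket", NULL) != 0 ||
	    rte_mempool_populate_default(mp) < 0) {
		rte_mempool_free(mp);
		return NULL;
	}

	return mp;
}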

The benefit of the contiguous block dequeue operation is demonstrated by
performance measurements using the mempool autotest with minor enhancements:
 - in the original test bulks are powers of two, which is unacceptable
   for us, so they are changed to multiples of contig_block_size;
 - the test code is duplicated to support plain dequeue and
   dequeue_contig_blocks;
 - all the extra test variations (with/without cache etc) are eliminated;
 - a fake read from the dequeued buffer is added (in both cases) to
   simulate mbufs access.

start performance test for bucket (without cache)
mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=   1 n_keep=  30 Srate_persec=   111935488
mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=   1 n_keep=  60 Srate_persec=   115290931
mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=  15 n_keep=  30 Srate_persec=   353055539
mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=  15 n_keep=  60 Srate_persec=   353330790
mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=   1 n_keep=  30 Srate_persec=   224657407
mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=   1 n_keep=  60 Srate_persec=   230411468
mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=  15 n_keep=  30 Srate_persec=   706700902
mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=  15 n_keep=  60 Srate_persec=   703673139
mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=   1 n_keep=  30 Srate_persec=   425236887
mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=   1 n_keep=  60 Srate_persec=   437295512
mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=  15 n_keep=  30 Srate_persec=  1343409356
mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=  15 n_keep=  60 Srate_persec=  1336567397
start performance test for bucket (without cache + contiguous dequeue)
mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=   1 n_keep=  30 Crate_persec=   122945536
mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=   1 n_keep=  60 Crate_persec=   126458265
mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=  15 n_keep=  30 Crate_persec=   374262988
mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=  15 n_keep=  60 Crate_persec=   377316966
mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=   1 n_keep=  30 Crate_persec=   244842496
mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=   1 n_keep=  60 Crate_persec=   251618917
mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=  15 n_keep=  30 Crate_persec=   751226060
mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=  15 n_keep=  60 Crate_persec=   756233010
mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=   1 n_keep=  30 Crate_persec=   462068120
mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=   1 n_keep=  60 Crate_persec=   476997221
mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=  15 n_keep=  30 Crate_persec=  1432171313
mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=  15 n_keep=  60 Crate_persec=  1438829771

The number of objects in the contiguous block is a function of bucket
memory size (.config option) and total element size. In the future an
additional API with the possibility to pass parameters on mempool
allocation may be added.
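
A back-of-the-envelope sketch of that relationship, assuming only a small
per-bucket reservation is subtracted; the real driver may reserve more
(padding, alignment):

#include <stddef.h>

static unsigned int
approx_obj_per_bucket(size_t bucket_mem_size, size_t total_elt_size,
		      size_t reserved)
{
	return (unsigned int)((bucket_mem_size - reserved) / total_elt_size);
}
/* e.g. approx_obj_per_bucket(64 * 1024, 2176, 64) == 30 */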

It breaks the ABI since it changes rte_mempool_ops. The ABI version is
already bumped in [4].


[1] https://dpdk.org/ml/archives/dev/2018-January/088698.html
[2] https://dpdk.org/ml/archives/dev/2017-November/082335.html
[3] https://dpdk.org/ml/archives/dev/2018-March/093807.html
[4] https://dpdk.org/ml/archives/dev/2018-March/093196.html


RFCv2 -> v1:
  - rebased on top of [3]
  - cleanup deprecation notice when it is done
  - mark a new API experimental
  - move contig blocks dequeue debug checks/processing to the library function
  - add contig blocks get stats
  - add release notes

RFCv1 -> RFCv2:
  - change the info API so that the API user can obtain from the driver
    the information required to know the contiguous block size
  - use SPDX tags
  - avoid all objects affinity to single lcore
  - fix bucket get_count
  - fix NO_CACHE_ALIGN case in bucket mempool


Andrew Rybchenko (1):
  doc: advertise bucket mempool driver

Artem V. Andreev (5):
  mempool/bucket: implement bucket mempool manager
  mempool: implement abstract mempool info API
  mempool: support block dequeue operation
  mempool/bucket: implement block dequeue operation
  mempool/bucket: do not allow one lcore to grab all buckets

 MAINTAINERS                                        |   9 +
 config/common_base                                 |   2 +
 doc/guides/rel_notes/deprecation.rst               |   7 -
 doc/guides/rel_notes/release_18_05.rst             |   9 +
 drivers/mempool/Makefile                           |   1 +
 drivers/mempool/bucket/Makefile                    |  27 +
 drivers/mempool/bucket/meson.build                 |   9 +
 drivers/mempool/bucket/rte_mempool_bucket.c        | 627 +++++++++++++++++++++
 .../mempool/bucket/rte_mempool_bucket_version.map  |   4 +
 lib/librte_mempool/Makefile                        |   1 +
 lib/librte_mempool/meson.build                     |   2 +
 lib/librte_mempool/rte_mempool.c                   |  39 ++
 lib/librte_mempool/rte_mempool.h                   | 190 +++++++
 lib/librte_mempool/rte_mempool_ops.c               |  16 +
 lib/librte_mempool/rte_mempool_version.map         |   8 +
 mk/rte.app.mk                                      |   1 +
 16 files changed, 945 insertions(+), 7 deletions(-)
 create mode 100644 drivers/mempool/bucket/Makefile
 create mode 100644 drivers/mempool/bucket/meson.build
 create mode 100644 drivers/mempool/bucket/rte_mempool_bucket.c
 create mode 100644 drivers/mempool/bucket/rte_mempool_bucket_version.map

-- 
2.7.4

^ permalink raw reply	[flat|nested] 197+ messages in thread

* [PATCH v1 1/6] mempool/bucket: implement bucket mempool manager
  2018-03-26 16:12   ` [PATCH v1 0/6] mempool: add bucket driver Andrew Rybchenko
@ 2018-03-26 16:12     ` Andrew Rybchenko
  2018-03-26 16:12     ` [PATCH v1 2/6] mempool: implement abstract mempool info API Andrew Rybchenko
                       ` (5 subsequent siblings)
  6 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-26 16:12 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ, Artem V. Andreev

From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>

The manager provides a way to allocate physically and virtually
contiguous sets of objects.

Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 MAINTAINERS                                        |   9 +
 config/common_base                                 |   2 +
 drivers/mempool/Makefile                           |   1 +
 drivers/mempool/bucket/Makefile                    |  27 +
 drivers/mempool/bucket/meson.build                 |   9 +
 drivers/mempool/bucket/rte_mempool_bucket.c        | 562 +++++++++++++++++++++
 .../mempool/bucket/rte_mempool_bucket_version.map  |   4 +
 mk/rte.app.mk                                      |   1 +
 8 files changed, 615 insertions(+)
 create mode 100644 drivers/mempool/bucket/Makefile
 create mode 100644 drivers/mempool/bucket/meson.build
 create mode 100644 drivers/mempool/bucket/rte_mempool_bucket.c
 create mode 100644 drivers/mempool/bucket/rte_mempool_bucket_version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index aa30bd9..db903b3 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -326,6 +326,15 @@ F: test/test/test_rawdev.c
 F: doc/guides/prog_guide/rawdev.rst
 
 
+Memory Pool Drivers
+-------------------
+
+Bucket memory pool
+M: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
+M: Andrew Rybchenko <arybchenko@solarflare.com>
+F: drivers/mempool/bucket/
+
+
 Bus Drivers
 -----------
 
diff --git a/config/common_base b/config/common_base
index ee10b44..dd6d420 100644
--- a/config/common_base
+++ b/config/common_base
@@ -606,6 +606,8 @@ CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG=n
 #
 # Compile Mempool drivers
 #
+CONFIG_RTE_DRIVER_MEMPOOL_BUCKET=y
+CONFIG_RTE_DRIVER_MEMPOOL_BUCKET_SIZE_KB=64
 CONFIG_RTE_DRIVER_MEMPOOL_RING=y
 CONFIG_RTE_DRIVER_MEMPOOL_STACK=y
 
diff --git a/drivers/mempool/Makefile b/drivers/mempool/Makefile
index fc8b73b..28c2e83 100644
--- a/drivers/mempool/Makefile
+++ b/drivers/mempool/Makefile
@@ -3,6 +3,7 @@
 
 include $(RTE_SDK)/mk/rte.vars.mk
 
+DIRS-$(CONFIG_RTE_DRIVER_MEMPOOL_BUCKET) += bucket
 ifeq ($(CONFIG_RTE_LIBRTE_DPAA_BUS),y)
 DIRS-$(CONFIG_RTE_LIBRTE_DPAA_MEMPOOL) += dpaa
 endif
diff --git a/drivers/mempool/bucket/Makefile b/drivers/mempool/bucket/Makefile
new file mode 100644
index 0000000..7364916
--- /dev/null
+++ b/drivers/mempool/bucket/Makefile
@@ -0,0 +1,27 @@
+# SPDX-License-Identifier: BSD-3-Clause
+#
+# Copyright (c) 2017-2018 Solarflare Communications Inc.
+# All rights reserved.
+#
+# This software was jointly developed between OKTET Labs (under contract
+# for Solarflare) and Solarflare Communications, Inc.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_mempool_bucket.a
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+LDLIBS += -lrte_eal -lrte_mempool -lrte_ring
+
+EXPORT_MAP := rte_mempool_bucket_version.map
+
+LIBABIVER := 1
+
+SRCS-$(CONFIG_RTE_DRIVER_MEMPOOL_BUCKET) += rte_mempool_bucket.c
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/mempool/bucket/meson.build b/drivers/mempool/bucket/meson.build
new file mode 100644
index 0000000..618d791
--- /dev/null
+++ b/drivers/mempool/bucket/meson.build
@@ -0,0 +1,9 @@
+# SPDX-License-Identifier: BSD-3-Clause
+#
+# Copyright (c) 2017-2018 Solarflare Communications Inc.
+# All rights reserved.
+#
+# This software was jointly developed between OKTET Labs (under contract
+# for Solarflare) and Solarflare Communications, Inc.
+
+sources = files('rte_mempool_bucket.c')
diff --git a/drivers/mempool/bucket/rte_mempool_bucket.c b/drivers/mempool/bucket/rte_mempool_bucket.c
new file mode 100644
index 0000000..5a1bd79
--- /dev/null
+++ b/drivers/mempool/bucket/rte_mempool_bucket.c
@@ -0,0 +1,562 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright (c) 2017-2018 Solarflare Communications Inc.
+ * All rights reserved.
+ *
+ * This software was jointly developed between OKTET Labs (under contract
+ * for Solarflare) and Solarflare Communications, Inc.
+ */
+
+#include <stdbool.h>
+#include <stdio.h>
+#include <string.h>
+
+#include <rte_errno.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+#include <rte_malloc.h>
+
+/*
+ * The general idea of the bucket mempool driver is as follows.
+ * We keep track of physically contiguous groups (buckets) of objects
+ * of a certain size. Every such group has a counter that is
+ * incremented every time an object from that group is enqueued.
+ * Until the bucket is full, no objects from it are eligible for allocation.
+ * If a request is made to dequeue a multiple of the bucket size, it is
+ * satisfied by returning whole buckets instead of separate objects.
+ */
+
+
+struct bucket_header {
+	unsigned int lcore_id;
+	uint8_t fill_cnt;
+};
+
+struct bucket_stack {
+	unsigned int top;
+	unsigned int limit;
+	void *objects[];
+};
+
+struct bucket_data {
+	unsigned int header_size;
+	unsigned int total_elt_size;
+	unsigned int obj_per_bucket;
+	uintptr_t bucket_page_mask;
+	struct rte_ring *shared_bucket_ring;
+	struct bucket_stack *buckets[RTE_MAX_LCORE];
+	/*
+	 * Multi-producer single-consumer ring to hold objects that are
+	 * returned to the mempool at a different lcore than initially
+	 * dequeued
+	 */
+	struct rte_ring *adoption_buffer_rings[RTE_MAX_LCORE];
+	struct rte_ring *shared_orphan_ring;
+	struct rte_mempool *pool;
+	unsigned int bucket_mem_size;
+};
+
+static struct bucket_stack *
+bucket_stack_create(const struct rte_mempool *mp, unsigned int n_elts)
+{
+	struct bucket_stack *stack;
+
+	stack = rte_zmalloc_socket("bucket_stack",
+				   sizeof(struct bucket_stack) +
+				   n_elts * sizeof(void *),
+				   RTE_CACHE_LINE_SIZE,
+				   mp->socket_id);
+	if (stack == NULL)
+		return NULL;
+	stack->limit = n_elts;
+	stack->top = 0;
+
+	return stack;
+}
+
+static void
+bucket_stack_push(struct bucket_stack *stack, void *obj)
+{
+	RTE_ASSERT(stack->top < stack->limit);
+	stack->objects[stack->top++] = obj;
+}
+
+static void *
+bucket_stack_pop_unsafe(struct bucket_stack *stack)
+{
+	RTE_ASSERT(stack->top > 0);
+	return stack->objects[--stack->top];
+}
+
+static void *
+bucket_stack_pop(struct bucket_stack *stack)
+{
+	if (stack->top == 0)
+		return NULL;
+	return bucket_stack_pop_unsafe(stack);
+}
+
+static int
+bucket_enqueue_single(struct bucket_data *bd, void *obj)
+{
+	int rc = 0;
+	uintptr_t addr = (uintptr_t)obj;
+	struct bucket_header *hdr;
+	unsigned int lcore_id = rte_lcore_id();
+
+	addr &= bd->bucket_page_mask;
+	hdr = (struct bucket_header *)addr;
+
+	if (likely(hdr->lcore_id == lcore_id)) {
+		if (hdr->fill_cnt < bd->obj_per_bucket - 1) {
+			hdr->fill_cnt++;
+		} else {
+			hdr->fill_cnt = 0;
+			/* Stack is big enough to put all buckets */
+			bucket_stack_push(bd->buckets[lcore_id], hdr);
+		}
+	} else if (hdr->lcore_id != LCORE_ID_ANY) {
+		struct rte_ring *adopt_ring =
+			bd->adoption_buffer_rings[hdr->lcore_id];
+
+		rc = rte_ring_enqueue(adopt_ring, obj);
+		/* Ring is big enough to put all objects */
+		RTE_ASSERT(rc == 0);
+	} else if (hdr->fill_cnt < bd->obj_per_bucket - 1) {
+		hdr->fill_cnt++;
+	} else {
+		hdr->fill_cnt = 0;
+		rc = rte_ring_enqueue(bd->shared_bucket_ring, hdr);
+		/* Ring is big enough to put all buckets */
+		RTE_ASSERT(rc == 0);
+	}
+
+	return rc;
+}
+
+static int
+bucket_enqueue(struct rte_mempool *mp, void * const *obj_table,
+	       unsigned int n)
+{
+	struct bucket_data *bd = mp->pool_data;
+	unsigned int i;
+	int rc = 0;
+
+	for (i = 0; i < n; i++) {
+		rc = bucket_enqueue_single(bd, obj_table[i]);
+		RTE_ASSERT(rc == 0);
+	}
+	return rc;
+}
+
+static void **
+bucket_fill_obj_table(const struct bucket_data *bd, void **pstart,
+		      void **obj_table, unsigned int n)
+{
+	unsigned int i;
+	uint8_t *objptr = *pstart;
+
+	for (objptr += bd->header_size, i = 0; i < n;
+	     i++, objptr += bd->total_elt_size)
+		*obj_table++ = objptr;
+	*pstart = objptr;
+	return obj_table;
+}
+
+static int
+bucket_dequeue_orphans(struct bucket_data *bd, void **obj_table,
+		       unsigned int n_orphans)
+{
+	unsigned int i;
+	int rc;
+	uint8_t *objptr;
+
+	rc = rte_ring_dequeue_bulk(bd->shared_orphan_ring, obj_table,
+				   n_orphans, NULL);
+	if (unlikely(rc != (int)n_orphans)) {
+		struct bucket_header *hdr;
+
+		objptr = bucket_stack_pop(bd->buckets[rte_lcore_id()]);
+		hdr = (struct bucket_header *)objptr;
+
+		if (objptr == NULL) {
+			rc = rte_ring_dequeue(bd->shared_bucket_ring,
+					      (void **)&objptr);
+			if (rc != 0) {
+				rte_errno = ENOBUFS;
+				return -rte_errno;
+			}
+			hdr = (struct bucket_header *)objptr;
+			hdr->lcore_id = rte_lcore_id();
+		}
+		hdr->fill_cnt = 0;
+		bucket_fill_obj_table(bd, (void **)&objptr, obj_table,
+				      n_orphans);
+		for (i = n_orphans; i < bd->obj_per_bucket; i++,
+			     objptr += bd->total_elt_size) {
+			rc = rte_ring_enqueue(bd->shared_orphan_ring,
+					      objptr);
+			if (rc != 0) {
+				RTE_ASSERT(0);
+				rte_errno = -rc;
+				return rc;
+			}
+		}
+	}
+
+	return 0;
+}
+
+static int
+bucket_dequeue_buckets(struct bucket_data *bd, void **obj_table,
+		       unsigned int n_buckets)
+{
+	struct bucket_stack *cur_stack = bd->buckets[rte_lcore_id()];
+	unsigned int n_buckets_from_stack = RTE_MIN(n_buckets, cur_stack->top);
+	void **obj_table_base = obj_table;
+
+	n_buckets -= n_buckets_from_stack;
+	while (n_buckets_from_stack-- > 0) {
+		void *obj = bucket_stack_pop_unsafe(cur_stack);
+
+		obj_table = bucket_fill_obj_table(bd, &obj, obj_table,
+						  bd->obj_per_bucket);
+	}
+	while (n_buckets-- > 0) {
+		struct bucket_header *hdr;
+
+		if (unlikely(rte_ring_dequeue(bd->shared_bucket_ring,
+					      (void **)&hdr) != 0)) {
+			/*
+			 * Return the already-dequeued buffers
+			 * back to the mempool
+			 */
+			bucket_enqueue(bd->pool, obj_table_base,
+				       obj_table - obj_table_base);
+			rte_errno = ENOBUFS;
+			return -rte_errno;
+		}
+		hdr->lcore_id = rte_lcore_id();
+		obj_table = bucket_fill_obj_table(bd, (void **)&hdr,
+						  obj_table,
+						  bd->obj_per_bucket);
+	}
+
+	return 0;
+}
+
+static int
+bucket_adopt_orphans(struct bucket_data *bd)
+{
+	int rc = 0;
+	struct rte_ring *adopt_ring =
+		bd->adoption_buffer_rings[rte_lcore_id()];
+
+	if (unlikely(!rte_ring_empty(adopt_ring))) {
+		void *orphan;
+
+		while (rte_ring_sc_dequeue(adopt_ring, &orphan) == 0) {
+			rc = bucket_enqueue_single(bd, orphan);
+			RTE_ASSERT(rc == 0);
+		}
+	}
+	return rc;
+}
+
+static int
+bucket_dequeue(struct rte_mempool *mp, void **obj_table, unsigned int n)
+{
+	struct bucket_data *bd = mp->pool_data;
+	unsigned int n_buckets = n / bd->obj_per_bucket;
+	unsigned int n_orphans = n - n_buckets * bd->obj_per_bucket;
+	int rc = 0;
+
+	bucket_adopt_orphans(bd);
+
+	if (unlikely(n_orphans > 0)) {
+		rc = bucket_dequeue_orphans(bd, obj_table +
+					    (n_buckets * bd->obj_per_bucket),
+					    n_orphans);
+		if (rc != 0)
+			return rc;
+	}
+
+	if (likely(n_buckets > 0)) {
+		rc = bucket_dequeue_buckets(bd, obj_table, n_buckets);
+		if (unlikely(rc != 0) && n_orphans > 0) {
+			rte_ring_enqueue_bulk(bd->shared_orphan_ring,
+					      obj_table + (n_buckets *
+							   bd->obj_per_bucket),
+					      n_orphans, NULL);
+		}
+	}
+
+	return rc;
+}
+
+static void
+count_underfilled_buckets(struct rte_mempool *mp,
+			  void *opaque,
+			  struct rte_mempool_memhdr *memhdr,
+			  __rte_unused unsigned int mem_idx)
+{
+	unsigned int *pcount = opaque;
+	const struct bucket_data *bd = mp->pool_data;
+	unsigned int bucket_page_sz =
+		(unsigned int)(~bd->bucket_page_mask + 1);
+	uintptr_t align;
+	uint8_t *iter;
+
+	align = (uintptr_t)RTE_PTR_ALIGN_CEIL(memhdr->addr, bucket_page_sz) -
+		(uintptr_t)memhdr->addr;
+
+	for (iter = (uint8_t *)memhdr->addr + align;
+	     iter < (uint8_t *)memhdr->addr + memhdr->len;
+	     iter += bucket_page_sz) {
+		struct bucket_header *hdr = (struct bucket_header *)iter;
+
+		*pcount += hdr->fill_cnt;
+	}
+}
+
+static unsigned int
+bucket_get_count(const struct rte_mempool *mp)
+{
+	const struct bucket_data *bd = mp->pool_data;
+	unsigned int count =
+		bd->obj_per_bucket * rte_ring_count(bd->shared_bucket_ring) +
+		rte_ring_count(bd->shared_orphan_ring);
+	unsigned int i;
+
+	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		if (!rte_lcore_is_enabled(i))
+			continue;
+		count += bd->obj_per_bucket * bd->buckets[i]->top;
+	}
+
+	rte_mempool_mem_iter((struct rte_mempool *)(uintptr_t)mp,
+			     count_underfilled_buckets, &count);
+
+	return count;
+}
+
+static int
+bucket_alloc(struct rte_mempool *mp)
+{
+	int rg_flags = 0;
+	int rc = 0;
+	char rg_name[RTE_RING_NAMESIZE];
+	struct bucket_data *bd;
+	unsigned int i;
+	unsigned int bucket_header_size;
+
+	bd = rte_zmalloc_socket("bucket_pool", sizeof(*bd),
+				RTE_CACHE_LINE_SIZE, mp->socket_id);
+	if (bd == NULL) {
+		rc = -ENOMEM;
+		goto no_mem_for_data;
+	}
+	bd->pool = mp;
+	if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
+		bucket_header_size = sizeof(struct bucket_header);
+	else
+		bucket_header_size = RTE_CACHE_LINE_SIZE;
+	RTE_BUILD_BUG_ON(sizeof(struct bucket_header) > RTE_CACHE_LINE_SIZE);
+	bd->header_size = mp->header_size + bucket_header_size;
+	bd->total_elt_size = mp->header_size + mp->elt_size + mp->trailer_size;
+	bd->bucket_mem_size = RTE_DRIVER_MEMPOOL_BUCKET_SIZE_KB * 1024;
+	bd->obj_per_bucket = (bd->bucket_mem_size - bucket_header_size) /
+		bd->total_elt_size;
+	bd->bucket_page_mask = ~(rte_align64pow2(bd->bucket_mem_size) - 1);
+
+	if (mp->flags & MEMPOOL_F_SP_PUT)
+		rg_flags |= RING_F_SP_ENQ;
+	if (mp->flags & MEMPOOL_F_SC_GET)
+		rg_flags |= RING_F_SC_DEQ;
+
+	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		if (!rte_lcore_is_enabled(i))
+			continue;
+		bd->buckets[i] =
+			bucket_stack_create(mp, mp->size / bd->obj_per_bucket);
+		if (bd->buckets[i] == NULL) {
+			rc = -ENOMEM;
+			goto no_mem_for_stacks;
+		}
+		rc = snprintf(rg_name, sizeof(rg_name),
+			      RTE_MEMPOOL_MZ_FORMAT ".a%u", mp->name, i);
+		if (rc < 0 || rc >= (int)sizeof(rg_name)) {
+			rc = -ENAMETOOLONG;
+			goto no_mem_for_stacks;
+		}
+		bd->adoption_buffer_rings[i] =
+			rte_ring_create(rg_name, rte_align32pow2(mp->size + 1),
+					mp->socket_id,
+					rg_flags | RING_F_SC_DEQ);
+		if (bd->adoption_buffer_rings[i] == NULL) {
+			rc = -rte_errno;
+			goto no_mem_for_stacks;
+		}
+	}
+
+	rc = snprintf(rg_name, sizeof(rg_name),
+		      RTE_MEMPOOL_MZ_FORMAT ".0", mp->name);
+	if (rc < 0 || rc >= (int)sizeof(rg_name)) {
+		rc = -ENAMETOOLONG;
+		goto invalid_shared_orphan_ring;
+	}
+	bd->shared_orphan_ring =
+		rte_ring_create(rg_name, rte_align32pow2(mp->size + 1),
+				mp->socket_id, rg_flags);
+	if (bd->shared_orphan_ring == NULL) {
+		rc = -rte_errno;
+		goto cannot_create_shared_orphan_ring;
+	}
+
+	rc = snprintf(rg_name, sizeof(rg_name),
+		       RTE_MEMPOOL_MZ_FORMAT ".1", mp->name);
+	if (rc < 0 || rc >= (int)sizeof(rg_name)) {
+		rc = -ENAMETOOLONG;
+		goto invalid_shared_bucket_ring;
+	}
+	bd->shared_bucket_ring =
+		rte_ring_create(rg_name,
+				rte_align32pow2((mp->size + 1) /
+						bd->obj_per_bucket),
+				mp->socket_id, rg_flags);
+	if (bd->shared_bucket_ring == NULL) {
+		rc = -rte_errno;
+		goto cannot_create_shared_bucket_ring;
+	}
+
+	mp->pool_data = bd;
+
+	return 0;
+
+cannot_create_shared_bucket_ring:
+invalid_shared_bucket_ring:
+	rte_ring_free(bd->shared_orphan_ring);
+cannot_create_shared_orphan_ring:
+invalid_shared_orphan_ring:
+no_mem_for_stacks:
+	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		rte_free(bd->buckets[i]);
+		rte_ring_free(bd->adoption_buffer_rings[i]);
+	}
+	rte_free(bd);
+no_mem_for_data:
+	rte_errno = -rc;
+	return rc;
+}
+
+static void
+bucket_free(struct rte_mempool *mp)
+{
+	unsigned int i;
+	struct bucket_data *bd = mp->pool_data;
+
+	if (bd == NULL)
+		return;
+
+	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		rte_free(bd->buckets[i]);
+		rte_ring_free(bd->adoption_buffer_rings[i]);
+	}
+
+	rte_ring_free(bd->shared_orphan_ring);
+	rte_ring_free(bd->shared_bucket_ring);
+
+	rte_free(bd);
+}
+
+static ssize_t
+bucket_calc_mem_size(const struct rte_mempool *mp, uint32_t obj_num,
+		     __rte_unused uint32_t pg_shift, size_t *min_total_elt_size,
+		     size_t *align)
+{
+	struct bucket_data *bd = mp->pool_data;
+	unsigned int bucket_page_sz;
+
+	if (bd == NULL)
+		return -EINVAL;
+
+	bucket_page_sz = rte_align32pow2(bd->bucket_mem_size);
+	*align = bucket_page_sz;
+	*min_total_elt_size = bucket_page_sz;
+	/*
+	 * Each bucket occupies its own block aligned to
+	 * bucket_page_sz, so the required amount of memory is
+	 * a multiple of bucket_page_sz.
+	 * We also need extra space for a bucket header
+	 */
+	return ((obj_num + bd->obj_per_bucket - 1) /
+		bd->obj_per_bucket) * bucket_page_sz;
+}
+
+static int
+bucket_populate(struct rte_mempool *mp, unsigned int max_objs,
+		void *vaddr, rte_iova_t iova, size_t len,
+		rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
+{
+	struct bucket_data *bd = mp->pool_data;
+	unsigned int bucket_page_sz;
+	unsigned int bucket_header_sz;
+	unsigned int n_objs;
+	uintptr_t align;
+	uint8_t *iter;
+	int rc;
+
+	if (bd == NULL)
+		return -EINVAL;
+
+	bucket_page_sz = rte_align32pow2(bd->bucket_mem_size);
+	align = RTE_PTR_ALIGN_CEIL((uintptr_t)vaddr, bucket_page_sz) -
+		(uintptr_t)vaddr;
+
+	bucket_header_sz = bd->header_size - mp->header_size;
+	if (iova != RTE_BAD_IOVA)
+		iova += align + bucket_header_sz;
+
+	for (iter = (uint8_t *)vaddr + align, n_objs = 0;
+	     iter < (uint8_t *)vaddr + len && n_objs < max_objs;
+	     iter += bucket_page_sz) {
+		struct bucket_header *hdr = (struct bucket_header *)iter;
+		unsigned int chunk_len = bd->bucket_mem_size;
+
+		if ((size_t)(iter - (uint8_t *)vaddr) + chunk_len > len)
+			chunk_len = len - (iter - (uint8_t *)vaddr);
+		if (chunk_len <= bucket_header_sz)
+			break;
+		chunk_len -= bucket_header_sz;
+
+		hdr->fill_cnt = 0;
+		hdr->lcore_id = LCORE_ID_ANY;
+		rc = rte_mempool_op_populate_default(mp,
+						     RTE_MIN(bd->obj_per_bucket,
+							     max_objs - n_objs),
+						     iter + bucket_header_sz,
+						     iova, chunk_len,
+						     obj_cb, obj_cb_arg);
+		if (rc < 0)
+			return rc;
+		n_objs += rc;
+		if (iova != RTE_BAD_IOVA)
+			iova += bucket_page_sz;
+	}
+
+	return n_objs;
+}
+
+static const struct rte_mempool_ops ops_bucket = {
+	.name = "bucket",
+	.alloc = bucket_alloc,
+	.free = bucket_free,
+	.enqueue = bucket_enqueue,
+	.dequeue = bucket_dequeue,
+	.get_count = bucket_get_count,
+	.calc_mem_size = bucket_calc_mem_size,
+	.populate = bucket_populate,
+};
+
+
+MEMPOOL_REGISTER_OPS(ops_bucket);
diff --git a/drivers/mempool/bucket/rte_mempool_bucket_version.map b/drivers/mempool/bucket/rte_mempool_bucket_version.map
new file mode 100644
index 0000000..9b9ab1a
--- /dev/null
+++ b/drivers/mempool/bucket/rte_mempool_bucket_version.map
@@ -0,0 +1,4 @@
+DPDK_18.05 {
+
+	local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 94525dc..99f8103 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -121,6 +121,7 @@ endif
 ifeq ($(CONFIG_RTE_BUILD_SHARED_LIB),n)
 # plugins (link only if static libraries)
 
+_LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_BUCKET) += -lrte_mempool_bucket
 _LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_STACK)  += -lrte_mempool_stack
 ifeq ($(CONFIG_RTE_LIBRTE_DPAA_BUS),y)
 _LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA_MEMPOOL)   += -lrte_mempool_dpaa
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread
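
For illustration only, a minimal sketch of how an application could opt in to
this driver for a pool built with rte_mempool_create_empty(); the pool name,
object count and object size below are placeholders, not values taken from
the patch:

#include <rte_lcore.h>
#include <rte_mempool.h>

static struct rte_mempool *
create_bucket_pool(void)
{
	struct rte_mempool *mp;
	void *obj;

	mp = rte_mempool_create_empty("bucket_example", 8192 /* objects */,
				      2048 /* bytes per object */,
				      0 /* no per-lcore cache */,
				      0 /* no private data */,
				      rte_socket_id(), 0);
	if (mp == NULL)
		return NULL;

	/* select the ops registered by this patch instead of the default */
	if (rte_mempool_set_ops_byname(mp, "bucket", NULL) != 0 ||
	    rte_mempool_populate_default(mp) < 0) {
		rte_mempool_free(mp);
		return NULL;
	}

	/* single-object get/put keep working as with any other driver */
	if (rte_mempool_get(mp, &obj) == 0)
		rte_mempool_put(mp, obj);

	return mp;
}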

* [PATCH v1 2/6] mempool: implement abstract mempool info API
  2018-03-26 16:12   ` [PATCH v1 0/6] mempool: add bucket driver Andrew Rybchenko
  2018-03-26 16:12     ` [PATCH v1 1/6] mempool/bucket: implement bucket mempool manager Andrew Rybchenko
@ 2018-03-26 16:12     ` Andrew Rybchenko
  2018-04-19 16:42       ` Olivier Matz
  2018-03-26 16:12     ` [PATCH v1 3/6] mempool: support block dequeue operation Andrew Rybchenko
                       ` (4 subsequent siblings)
  6 siblings, 1 reply; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-26 16:12 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ, Artem V. Andreev

From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>

Primarily, it is intended as a way for the mempool driver to provide
additional information on how it lays out objects inside the mempool.

Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 lib/librte_mempool/rte_mempool.h           | 41 ++++++++++++++++++++++++++++++
 lib/librte_mempool/rte_mempool_ops.c       | 15 +++++++++++
 lib/librte_mempool/rte_mempool_version.map |  7 +++++
 3 files changed, 63 insertions(+)

diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 3e06ae0..1ac2f57 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -190,6 +190,14 @@ struct rte_mempool_memhdr {
 };
 
 /**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Additional information about the mempool
+ */
+struct rte_mempool_info;
+
+/**
  * The RTE mempool structure.
  */
 struct rte_mempool {
@@ -499,6 +507,16 @@ int rte_mempool_op_populate_default(struct rte_mempool *mp,
 		void *vaddr, rte_iova_t iova, size_t len,
 		rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Get some additional information about a mempool.
+ */
+typedef int (*rte_mempool_get_info_t)(const struct rte_mempool *mp,
+		struct rte_mempool_info *info);
+
+
 /** Structure defining mempool operations structure */
 struct rte_mempool_ops {
 	char name[RTE_MEMPOOL_OPS_NAMESIZE]; /**< Name of mempool ops struct. */
@@ -517,6 +535,10 @@ struct rte_mempool_ops {
 	 * provided memory chunk.
 	 */
 	rte_mempool_populate_t populate;
+	/**
+	 * Get mempool info
+	 */
+	rte_mempool_get_info_t get_info;
 } __rte_cache_aligned;
 
 #define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */
@@ -680,6 +702,25 @@ int rte_mempool_ops_populate(struct rte_mempool *mp, unsigned int max_objs,
 			     void *obj_cb_arg);
 
 /**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Wrapper for mempool_ops get_info callback.
+ *
+ * @param[in] mp
+ *   Pointer to the memory pool.
+ * @param[out] info
+ *   Pointer to the rte_mempool_info structure
+ * @return
+ *   - 0: Success; The mempool driver supports retrieving supplementary
+ *        mempool information
+ *   - -ENOTSUP: the mempool driver does not support the get_info op (valid case).
+ */
+__rte_experimental
+int rte_mempool_ops_get_info(const struct rte_mempool *mp,
+			 struct rte_mempool_info *info);
+
+/**
  * @internal wrapper for mempool_ops free callback.
  *
  * @param mp
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
index ea9be1e..efc1c08 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -59,6 +59,7 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h)
 	ops->get_count = h->get_count;
 	ops->calc_mem_size = h->calc_mem_size;
 	ops->populate = h->populate;
+	ops->get_info = h->get_info;
 
 	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
 
@@ -134,6 +135,20 @@ rte_mempool_ops_populate(struct rte_mempool *mp, unsigned int max_objs,
 			     obj_cb_arg);
 }
 
+/* wrapper to get additional mempool info */
+int
+rte_mempool_ops_get_info(const struct rte_mempool *mp,
+			 struct rte_mempool_info *info)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_get_ops(mp->ops_index);
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->get_info, -ENOTSUP);
+	return ops->get_info(mp, info);
+}
+
+
 /* sets mempool ops previously registered by rte_mempool_register_ops. */
 int
 rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index cf375db..c9d16ec 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -57,3 +57,10 @@ DPDK_18.05 {
 	rte_mempool_op_populate_default;
 
 } DPDK_17.11;
+
+EXPERIMENTAL {
+	global:
+
+	rte_mempool_ops_get_info;
+
+} DPDK_18.05;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread
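
As a usage sketch (assuming the contig_block_size member that the structure
gains later in this series; the helper name is illustrative), a caller can
treat -ENOTSUP as the normal "no extra information" case:

#include <rte_mempool.h>

/* returns the contiguous block size, or 0 if the driver reports nothing */
static unsigned int
probe_contig_block_size(const struct rte_mempool *mp)
{
	struct rte_mempool_info info;

	if (rte_mempool_ops_get_info(mp, &info) != 0)
		return 0; /* -ENOTSUP is a valid "no info" answer */

	return info.contig_block_size;
}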

* [PATCH v1 3/6] mempool: support block dequeue operation
  2018-03-26 16:12   ` [PATCH v1 0/6] mempool: add bucket driver Andrew Rybchenko
  2018-03-26 16:12     ` [PATCH v1 1/6] mempool/bucket: implement bucket mempool manager Andrew Rybchenko
  2018-03-26 16:12     ` [PATCH v1 2/6] mempool: implement abstract mempool info API Andrew Rybchenko
@ 2018-03-26 16:12     ` Andrew Rybchenko
  2018-04-19 16:41       ` Olivier Matz
  2018-03-26 16:12     ` [PATCH v1 4/6] mempool/bucket: implement " Andrew Rybchenko
                       ` (3 subsequent siblings)
  6 siblings, 1 reply; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-26 16:12 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ, Artem V. Andreev

From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>

If a mempool manager supports object blocks (physically and virtually
contiguous sets of objects), it is sufficient to get only the first
object, and the function avoids filling in information about each
block member.

Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 doc/guides/rel_notes/deprecation.rst       |   7 --
 lib/librte_mempool/Makefile                |   1 +
 lib/librte_mempool/meson.build             |   2 +
 lib/librte_mempool/rte_mempool.c           |  39 ++++++++
 lib/librte_mempool/rte_mempool.h           | 151 ++++++++++++++++++++++++++++-
 lib/librte_mempool/rte_mempool_ops.c       |   1 +
 lib/librte_mempool/rte_mempool_version.map |   1 +
 7 files changed, 194 insertions(+), 8 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 5301259..8249638 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -59,13 +59,6 @@ Deprecation Notices
 
   - ``rte_eal_mbuf_default_mempool_ops``
 
-* mempool: several API and ABI changes are planned in v18.05.
-
-  The following changes are planned:
-
-  - addition of new op to allocate contiguous
-    block of objects if underlying driver supports it.
-
 * mbuf: The control mbuf API will be removed in v18.05. The impacted
   functions and macros are:
 
diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
index 2c46fdd..62dd1a4 100644
--- a/lib/librte_mempool/Makefile
+++ b/lib/librte_mempool/Makefile
@@ -10,6 +10,7 @@ CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
 # Allow deprecated symbol to use deprecated rte_mempool_populate_iova_tab()
 # from earlier deprecated rte_mempool_populate_phys_tab()
 CFLAGS += -Wno-deprecated-declarations
+CFLAGS += -DALLOW_EXPERIMENTAL_API
 LDLIBS += -lrte_eal -lrte_ring
 
 EXPORT_MAP := rte_mempool_version.map
diff --git a/lib/librte_mempool/meson.build b/lib/librte_mempool/meson.build
index 22e912a..8ef88e3 100644
--- a/lib/librte_mempool/meson.build
+++ b/lib/librte_mempool/meson.build
@@ -1,6 +1,8 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2017 Intel Corporation
 
+allow_experimental_apis = true
+
 extra_flags = []
 
 # Allow deprecated symbol to use deprecated rte_mempool_populate_iova_tab()
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index c58bcc6..79f8429 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -1125,6 +1125,36 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp,
 #endif
 }
 
+void
+rte_mempool_contig_blocks_check_cookies(const struct rte_mempool *mp,
+	void * const *first_obj_table_const, unsigned int n, int free)
+{
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+	struct rte_mempool_info info;
+	const size_t total_elt_sz =
+		mp->header_size + mp->elt_size + mp->trailer_size;
+	unsigned int i, j;
+
+	rte_mempool_ops_get_info(mp, &info);
+
+	for (i = 0; i < n; ++i) {
+		void *first_obj = first_obj_table_const[i];
+
+		for (j = 0; j < info.contig_block_size; ++j) {
+			void *obj;
+
+			obj = (void *)((uintptr_t)first_obj + j * total_elt_sz);
+			rte_mempool_check_cookies(mp, &obj, 1, free);
+		}
+	}
+#else
+	RTE_SET_USED(mp);
+	RTE_SET_USED(first_obj_table_const);
+	RTE_SET_USED(n);
+	RTE_SET_USED(free);
+#endif
+}
+
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 static void
 mempool_obj_audit(struct rte_mempool *mp, __rte_unused void *opaque,
@@ -1190,6 +1220,7 @@ void
 rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 {
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+	struct rte_mempool_info info;
 	struct rte_mempool_debug_stats sum;
 	unsigned lcore_id;
 #endif
@@ -1231,6 +1262,7 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 
 	/* sum and dump statistics */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+	rte_mempool_ops_get_info(mp, &info);
 	memset(&sum, 0, sizeof(sum));
 	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
 		sum.put_bulk += mp->stats[lcore_id].put_bulk;
@@ -1239,6 +1271,8 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 		sum.get_success_objs += mp->stats[lcore_id].get_success_objs;
 		sum.get_fail_bulk += mp->stats[lcore_id].get_fail_bulk;
 		sum.get_fail_objs += mp->stats[lcore_id].get_fail_objs;
+		sum.get_success_blks += mp->stats[lcore_id].get_success_blks;
+		sum.get_fail_blks += mp->stats[lcore_id].get_fail_blks;
 	}
 	fprintf(f, "  stats:\n");
 	fprintf(f, "    put_bulk=%"PRIu64"\n", sum.put_bulk);
@@ -1247,6 +1281,11 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 	fprintf(f, "    get_success_objs=%"PRIu64"\n", sum.get_success_objs);
 	fprintf(f, "    get_fail_bulk=%"PRIu64"\n", sum.get_fail_bulk);
 	fprintf(f, "    get_fail_objs=%"PRIu64"\n", sum.get_fail_objs);
+	if (info.contig_block_size > 0) {
+		fprintf(f, "    get_success_blks=%"PRIu64"\n",
+			sum.get_success_blks);
+		fprintf(f, "    get_fail_blks=%"PRIu64"\n", sum.get_fail_blks);
+	}
 #else
 	fprintf(f, "  no statistics available\n");
 #endif
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 1ac2f57..3cab3a0 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -70,6 +70,10 @@ struct rte_mempool_debug_stats {
 	uint64_t get_success_objs; /**< Objects successfully allocated. */
 	uint64_t get_fail_bulk;    /**< Failed allocation number. */
 	uint64_t get_fail_objs;    /**< Objects that failed to be allocated. */
+	/** Successful allocation number of contiguous blocks. */
+	uint64_t get_success_blks;
+	/** Failed allocation number of contiguous blocks. */
+	uint64_t get_fail_blks;
 } __rte_cache_aligned;
 #endif
 
@@ -195,7 +199,10 @@ struct rte_mempool_memhdr {
  *
  * Additional information about the mempool
  */
-struct rte_mempool_info;
+struct rte_mempool_info {
+	/** Number of objects in the contiguous block */
+	unsigned int contig_block_size;
+};
 
 /**
  * The RTE mempool structure.
@@ -273,8 +280,16 @@ struct rte_mempool {
 			mp->stats[__lcore_id].name##_bulk += 1;	\
 		}                                               \
 	} while(0)
+#define __MEMPOOL_CONTIG_BLOCKS_STAT_ADD(mp, name, n) do {                    \
+		unsigned int __lcore_id = rte_lcore_id();       \
+		if (__lcore_id < RTE_MAX_LCORE) {               \
+			mp->stats[__lcore_id].name##_blks += n;	\
+			mp->stats[__lcore_id].name##_bulk += 1;	\
+		}                                               \
+	} while (0)
 #else
 #define __MEMPOOL_STAT_ADD(mp, name, n) do {} while(0)
+#define __MEMPOOL_CONTIG_BLOCKS_STAT_ADD(mp, name, n) do {} while (0)
 #endif
 
 /**
@@ -342,6 +357,38 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp,
 #define __mempool_check_cookies(mp, obj_table_const, n, free) do {} while(0)
 #endif /* RTE_LIBRTE_MEMPOOL_DEBUG */
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * @internal Check contiguous object blocks and update cookies or panic.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param first_obj_table_const
+ *   Pointer to a table of void * pointers (first object of the contiguous
+ *   object blocks).
+ * @param n
+ *   Number of contiguous object blocks.
+ * @param free
+ *   - 0: object is supposed to be allocated, mark it as free
+ *   - 1: object is supposed to be free, mark it as allocated
+ *   - 2: just check that cookie is valid (free or allocated)
+ */
+void rte_mempool_contig_blocks_check_cookies(const struct rte_mempool *mp,
+	void * const *first_obj_table_const, unsigned int n, int free);
+
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+#define __mempool_contig_blocks_check_cookies(mp, first_obj_table_const, n, \
+					      free) \
+	rte_mempool_contig_blocks_check_cookies(mp, first_obj_table_const, n, \
+						free)
+#else
+#define __mempool_contig_blocks_check_cookies(mp, first_obj_table_const, n, \
+					      free) \
+	do {} while (0)
+#endif /* RTE_LIBRTE_MEMPOOL_DEBUG */
+
 #define RTE_MEMPOOL_OPS_NAMESIZE 32 /**< Max length of ops struct name. */
 
 /**
@@ -374,6 +421,15 @@ typedef int (*rte_mempool_dequeue_t)(struct rte_mempool *mp,
 		void **obj_table, unsigned int n);
 
 /**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Dequeue a number of contiguous object blocks from the external pool.
+ */
+typedef int (*rte_mempool_dequeue_contig_blocks_t)(struct rte_mempool *mp,
+		 void **first_obj_table, unsigned int n);
+
+/**
  * Return the number of available objects in the external pool.
  */
 typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
@@ -539,6 +595,10 @@ struct rte_mempool_ops {
 	 * Get mempool info
 	 */
 	rte_mempool_get_info_t get_info;
+	/**
+	 * Dequeue a number of contiguous object blocks.
+	 */
+	rte_mempool_dequeue_contig_blocks_t dequeue_contig_blocks;
 } __rte_cache_aligned;
 
 #define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */
@@ -617,6 +677,30 @@ rte_mempool_ops_dequeue_bulk(struct rte_mempool *mp,
 }
 
 /**
+ * @internal Wrapper for mempool_ops dequeue_contig_blocks callback.
+ *
+ * @param[in] mp
+ *   Pointer to the memory pool.
+ * @param[out] first_obj_table
+ *   Pointer to a table of void * pointers (first objects).
+ * @param[in] n
+ *   Number of blocks to get.
+ * @return
+ *   - 0: Success; got n objects.
+ *   - <0: Error; code of dequeue function.
+ */
+static inline int
+rte_mempool_ops_dequeue_contig_blocks(struct rte_mempool *mp,
+		void **first_obj_table, unsigned int n)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_get_ops(mp->ops_index);
+	RTE_ASSERT(ops->dequeue_contig_blocks != NULL);
+	return ops->dequeue_contig_blocks(mp, first_obj_table, n);
+}
+
+/**
  * @internal wrapper for mempool_ops enqueue callback.
  *
  * @param mp
@@ -1531,6 +1615,71 @@ rte_mempool_get(struct rte_mempool *mp, void **obj_p)
 }
 
 /**
+ * @internal Get contiguous blocks of objects from the pool. Used internally.
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @param first_obj_table
+ *   A pointer to a pointer to the first object in each block.
+ * @param n
+ *   A number of blocks to get.
+ * @return
+ *   - >0: Success
+ *   - <0: Error
+ */
+static __rte_always_inline int
+__mempool_generic_get_contig_blocks(struct rte_mempool *mp,
+				    void **first_obj_table, unsigned int n)
+{
+	int ret;
+
+	ret = rte_mempool_ops_dequeue_contig_blocks(mp, first_obj_table, n);
+	if (ret < 0)
+		__MEMPOOL_CONTIG_BLOCKS_STAT_ADD(mp, get_fail, n);
+	else
+		__MEMPOOL_CONTIG_BLOCKS_STAT_ADD(mp, get_success, n);
+
+	return ret;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Get contiguous blocks of objects from the mempool.
+ *
+ * If cache is enabled, consider flushing it first, to reuse objects
+ * as soon as possible.
+ *
+ * The application should check that the driver supports the operation
+ * by calling rte_mempool_ops_get_info() and checking that `contig_block_size`
+ * is not zero.
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @param first_obj_table
+ *   A pointer to a pointer to the first object in each block.
+ * @param n
+ *   The number of blocks to get from mempool.
+ * @return
+ *   - 0: Success; blocks taken.
+ *   - -ENOBUFS: Not enough entries in the mempool; no object is retrieved.
+ *   - -EOPNOTSUPP: The mempool driver does not support block dequeue
+ */
+static __rte_always_inline int
+__rte_experimental
+rte_mempool_get_contig_blocks(struct rte_mempool *mp,
+			      void **first_obj_table, unsigned int n)
+{
+	int ret;
+
+	ret = __mempool_generic_get_contig_blocks(mp, first_obj_table, n);
+	if (ret == 0)
+		__mempool_contig_blocks_check_cookies(mp, first_obj_table, n,
+						      1);
+	return ret;
+}
+
+/**
  * Return the number of entries in the mempool.
  *
  * When cache is enabled, this function has to browse the length of
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
index efc1c08..a27e1fa 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -60,6 +60,7 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h)
 	ops->calc_mem_size = h->calc_mem_size;
 	ops->populate = h->populate;
 	ops->get_info = h->get_info;
+	ops->dequeue_contig_blocks = h->dequeue_contig_blocks;
 
 	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
 
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index c9d16ec..1c406b5 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -53,6 +53,7 @@ DPDK_17.11 {
 DPDK_18.05 {
 	global:
 
+	rte_mempool_contig_blocks_check_cookies;
 	rte_mempool_op_calc_mem_size_default;
 	rte_mempool_op_populate_default;
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread
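
A minimal usage sketch of the new block dequeue path (not part of the patch;
the helper name and error handling are illustrative): probe the block size,
dequeue one block, then hand its objects back one by one:

#include <errno.h>
#include <rte_mempool.h>

static int
use_one_block(struct rte_mempool *mp)
{
	struct rte_mempool_info info;
	size_t total_elt_sz;
	void *first_obj;
	unsigned int i;

	if (rte_mempool_ops_get_info(mp, &info) != 0 ||
	    info.contig_block_size == 0)
		return -ENOTSUP; /* the driver has no block dequeue */

	/* a per-lcore cache, if any, could be flushed first (see API comment) */
	if (rte_mempool_get_contig_blocks(mp, &first_obj, 1) != 0)
		return -ENOBUFS;

	/* objects inside a block are laid out back to back */
	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
	for (i = 0; i < info.contig_block_size; i++) {
		void *obj = (char *)first_obj + i * total_elt_sz;

		rte_mempool_put(mp, obj); /* objects may be returned one by one */
	}

	return 0;
}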

* [PATCH v1 4/6] mempool/bucket: implement block dequeue operation
  2018-03-26 16:12   ` [PATCH v1 0/6] mempool: add bucket driver Andrew Rybchenko
                       ` (2 preceding siblings ...)
  2018-03-26 16:12     ` [PATCH v1 3/6] mempool: support block dequeue operation Andrew Rybchenko
@ 2018-03-26 16:12     ` Andrew Rybchenko
  2018-03-26 16:12     ` [PATCH v1 5/6] mempool/bucket: do not allow one lcore to grab all buckets Andrew Rybchenko
                       ` (2 subsequent siblings)
  6 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-26 16:12 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ, Artem V. Andreev

From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>

Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/mempool/bucket/rte_mempool_bucket.c | 52 +++++++++++++++++++++++++++++
 1 file changed, 52 insertions(+)

diff --git a/drivers/mempool/bucket/rte_mempool_bucket.c b/drivers/mempool/bucket/rte_mempool_bucket.c
index 5a1bd79..0365671 100644
--- a/drivers/mempool/bucket/rte_mempool_bucket.c
+++ b/drivers/mempool/bucket/rte_mempool_bucket.c
@@ -294,6 +294,46 @@ bucket_dequeue(struct rte_mempool *mp, void **obj_table, unsigned int n)
 	return rc;
 }
 
+static int
+bucket_dequeue_contig_blocks(struct rte_mempool *mp, void **first_obj_table,
+			     unsigned int n)
+{
+	struct bucket_data *bd = mp->pool_data;
+	const uint32_t header_size = bd->header_size;
+	struct bucket_stack *cur_stack = bd->buckets[rte_lcore_id()];
+	unsigned int n_buckets_from_stack = RTE_MIN(n, cur_stack->top);
+	struct bucket_header *hdr;
+	void **first_objp = first_obj_table;
+
+	bucket_adopt_orphans(bd);
+
+	n -= n_buckets_from_stack;
+	while (n_buckets_from_stack-- > 0) {
+		hdr = bucket_stack_pop_unsafe(cur_stack);
+		*first_objp++ = (uint8_t *)hdr + header_size;
+	}
+	if (n > 0) {
+		if (unlikely(rte_ring_dequeue_bulk(bd->shared_bucket_ring,
+						   first_objp, n, NULL) != n)) {
+			/* Return the already dequeued buckets */
+			while (first_objp-- != first_obj_table) {
+				bucket_stack_push(cur_stack,
+						  (uint8_t *)*first_objp -
+						  header_size);
+			}
+			rte_errno = ENOBUFS;
+			return -rte_errno;
+		}
+		while (n-- > 0) {
+			hdr = (struct bucket_header *)*first_objp;
+			hdr->lcore_id = rte_lcore_id();
+			*first_objp++ = (uint8_t *)hdr + header_size;
+		}
+	}
+
+	return 0;
+}
+
 static void
 count_underfilled_buckets(struct rte_mempool *mp,
 			  void *opaque,
@@ -547,6 +587,16 @@ bucket_populate(struct rte_mempool *mp, unsigned int max_objs,
 	return n_objs;
 }
 
+static int
+bucket_get_info(const struct rte_mempool *mp, struct rte_mempool_info *info)
+{
+	struct bucket_data *bd = mp->pool_data;
+
+	info->contig_block_size = bd->obj_per_bucket;
+	return 0;
+}
+
+
 static const struct rte_mempool_ops ops_bucket = {
 	.name = "bucket",
 	.alloc = bucket_alloc,
@@ -556,6 +606,8 @@ static const struct rte_mempool_ops ops_bucket = {
 	.get_count = bucket_get_count,
 	.calc_mem_size = bucket_calc_mem_size,
 	.populate = bucket_populate,
+	.get_info = bucket_get_info,
+	.dequeue_contig_blocks = bucket_dequeue_contig_blocks,
 };
 
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [PATCH v1 5/6] mempool/bucket: do not allow one lcore to grab all buckets
  2018-03-26 16:12   ` [PATCH v1 0/6] mempool: add bucket driver Andrew Rybchenko
                       ` (3 preceding siblings ...)
  2018-03-26 16:12     ` [PATCH v1 4/6] mempool/bucket: implement " Andrew Rybchenko
@ 2018-03-26 16:12     ` Andrew Rybchenko
  2018-03-26 16:12     ` [PATCH v1 6/6] doc: advertise bucket mempool driver Andrew Rybchenko
  2018-04-19 16:41     ` [PATCH v1 0/6] mempool: add bucket driver Olivier Matz
  6 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-26 16:12 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ, Artem V. Andreev

From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>

Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/mempool/bucket/rte_mempool_bucket.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/drivers/mempool/bucket/rte_mempool_bucket.c b/drivers/mempool/bucket/rte_mempool_bucket.c
index 0365671..6c2da1c 100644
--- a/drivers/mempool/bucket/rte_mempool_bucket.c
+++ b/drivers/mempool/bucket/rte_mempool_bucket.c
@@ -42,6 +42,7 @@ struct bucket_data {
 	unsigned int header_size;
 	unsigned int total_elt_size;
 	unsigned int obj_per_bucket;
+	unsigned int bucket_stack_thresh;
 	uintptr_t bucket_page_mask;
 	struct rte_ring *shared_bucket_ring;
 	struct bucket_stack *buckets[RTE_MAX_LCORE];
@@ -139,6 +140,7 @@ bucket_enqueue(struct rte_mempool *mp, void * const *obj_table,
 	       unsigned int n)
 {
 	struct bucket_data *bd = mp->pool_data;
+	struct bucket_stack *local_stack = bd->buckets[rte_lcore_id()];
 	unsigned int i;
 	int rc = 0;
 
@@ -146,6 +148,15 @@ bucket_enqueue(struct rte_mempool *mp, void * const *obj_table,
 		rc = bucket_enqueue_single(bd, obj_table[i]);
 		RTE_ASSERT(rc == 0);
 	}
+	if (local_stack->top > bd->bucket_stack_thresh) {
+		rte_ring_enqueue_bulk(bd->shared_bucket_ring,
+				      &local_stack->objects
+				      [bd->bucket_stack_thresh],
+				      local_stack->top -
+				      bd->bucket_stack_thresh,
+				      NULL);
+	    local_stack->top = bd->bucket_stack_thresh;
+	}
 	return rc;
 }
 
@@ -408,6 +419,8 @@ bucket_alloc(struct rte_mempool *mp)
 	bd->obj_per_bucket = (bd->bucket_mem_size - bucket_header_size) /
 		bd->total_elt_size;
 	bd->bucket_page_mask = ~(rte_align64pow2(bd->bucket_mem_size) - 1);
+	/* eventually this should be a tunable parameter */
+	bd->bucket_stack_thresh = (mp->size / bd->obj_per_bucket) * 4 / 3;
 
 	if (mp->flags & MEMPOOL_F_SP_PUT)
 		rg_flags |= RING_F_SP_ENQ;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [PATCH v1 6/6] doc: advertise bucket mempool driver
  2018-03-26 16:12   ` [PATCH v1 0/6] mempool: add bucket driver Andrew Rybchenko
                       ` (4 preceding siblings ...)
  2018-03-26 16:12     ` [PATCH v1 5/6] mempool/bucket: do not allow one lcore to grab all buckets Andrew Rybchenko
@ 2018-03-26 16:12     ` Andrew Rybchenko
  2018-04-19 16:43       ` Olivier Matz
  2018-04-19 16:41     ` [PATCH v1 0/6] mempool: add bucket driver Olivier Matz
  6 siblings, 1 reply; 197+ messages in thread
From: Andrew Rybchenko @ 2018-03-26 16:12 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 doc/guides/rel_notes/release_18_05.rst | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/doc/guides/rel_notes/release_18_05.rst b/doc/guides/rel_notes/release_18_05.rst
index 016c4ed..c578364 100644
--- a/doc/guides/rel_notes/release_18_05.rst
+++ b/doc/guides/rel_notes/release_18_05.rst
@@ -52,6 +52,15 @@ New Features
   * Added support for NVGRE, VXLAN and GENEVE filters in flow API.
   * Added support for DROP action in flow API.
 
+* **Added bucket mempool driver.**
+
+  Added a bucket mempool driver which provides a way to allocate contiguous
+  blocks of objects.
+  The number of objects in a block depends on how many objects fit in an
+  RTE_DRIVER_MEMPOOL_BUCKET_SIZE_KB memory chunk, which is a build-time option.
+  The number may be obtained using the rte_mempool_ops_get_info() API.
+  Contiguous blocks may be allocated using the rte_mempool_get_contig_blocks() API.
+
 
 API Changes
 -----------
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread
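
For a purely illustrative sense of scale (the element size here is an
assumption, not a value from the series): the driver computes the block size
as

	objects_per_block = (bucket_mem_size - bucket_header_size) / total_elt_size

so with the default 64 KB bucket, a 64-byte (cache-line) bucket header and a
total element size of 2432 bytes, this gives (65536 - 64) / 2432 = 26 objects
per block.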

* Re: [PATCH v3 03/11] mempool: ensure the mempool is initialized before populating
  2018-03-26 16:09     ` [PATCH v3 03/11] mempool: ensure the mempool is initialized before populating Andrew Rybchenko
@ 2018-04-04 15:06       ` santosh
  2018-04-06 15:50       ` Olivier Matz
  1 sibling, 0 replies; 197+ messages in thread
From: santosh @ 2018-04-04 15:06 UTC (permalink / raw)
  To: Andrew Rybchenko, dev; +Cc: Olivier MATZ, Artem V. Andreev


On Monday 26 March 2018 09:39 PM, Andrew Rybchenko wrote:
> From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>
>
> Callback to calculate required memory area size may require mempool
> driver data to be already allocated and initialized.
>
> Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
> ---

Acked-by: Santosh Shukla <Santosh.Shukla@caviumnetworks.com>

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v3 04/11] mempool: add op to calculate memory size to be allocated
  2018-03-26 16:09     ` [PATCH v3 04/11] mempool: add op to calculate memory size to be allocated Andrew Rybchenko
@ 2018-04-04 15:08       ` santosh
  2018-04-06 15:51       ` Olivier Matz
  2018-04-12 15:22       ` Burakov, Anatoly
  2 siblings, 0 replies; 197+ messages in thread
From: santosh @ 2018-04-04 15:08 UTC (permalink / raw)
  To: Andrew Rybchenko, dev; +Cc: Olivier MATZ


On Monday 26 March 2018 09:39 PM, Andrew Rybchenko wrote:
> Size of memory chunk required to populate mempool objects depends
> on how objects are stored in the memory. Different mempool drivers
> may have different requirements and a new operation allows to
> calculate memory size in accordance with driver requirements and
> advertise requirements on minimum memory chunk size and alignment
> in a generic way.
>
> Bump ABI version since the patch breaks it.
>
> Suggested-by: Olivier Matz <olivier.matz@6wind.com>
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
> ---

Acked-by: Santosh Shukla <Santosh.Shukla@caviumnetworks.com>

^ permalink raw reply	[flat|nested] 197+ messages in thread
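
To illustrate the op quoted above, a hypothetical driver-side sketch (the
example_ name is made up, not from the series) that defers to the default
helper and then tightens its own alignment requirement:

#include <rte_common.h>
#include <rte_mempool.h>

static ssize_t
example_calc_mem_size(const struct rte_mempool *mp, uint32_t obj_num,
		      uint32_t pg_shift, size_t *min_chunk_size, size_t *align)
{
	ssize_t mem_size;

	mem_size = rte_mempool_op_calc_mem_size_default(mp, obj_num, pg_shift,
							min_chunk_size, align);
	if (mem_size < 0)
		return mem_size;

	/* driver-specific constraint on top of the default result */
	*align = RTE_MAX(*align, (size_t)RTE_CACHE_LINE_SIZE);

	return mem_size;
}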

* Re: [PATCH v3 05/11] mempool: add op to populate objects using provided memory
  2018-03-26 16:09     ` [PATCH v3 05/11] mempool: add op to populate objects using provided memory Andrew Rybchenko
@ 2018-04-04 15:09       ` santosh
  2018-04-06 15:51       ` Olivier Matz
  1 sibling, 0 replies; 197+ messages in thread
From: santosh @ 2018-04-04 15:09 UTC (permalink / raw)
  To: Andrew Rybchenko, dev; +Cc: Olivier MATZ


On Monday 26 March 2018 09:39 PM, Andrew Rybchenko wrote:
> The callback allows to customize how objects are stored in the
> memory chunk. Default implementation of the callback which simply
> puts objects one by one is available.
>
> Suggested-by: Olivier Matz <olivier.matz@6wind.com>
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
> ---

Acked-by: Santosh Shukla <Santosh.Shukla@caviumnetworks.com>

^ permalink raw reply	[flat|nested] 197+ messages in thread
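
Similarly, a hypothetical populate callback sketch (the example_ name is made
up) that simply wraps the default object-by-object placement exported by the
series:

#include <rte_mempool.h>

static int
example_populate(struct rte_mempool *mp, unsigned int max_objs, void *vaddr,
		 rte_iova_t iova, size_t len,
		 rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
{
	/* a real driver would place or align its own metadata here first */
	return rte_mempool_op_populate_default(mp, max_objs, vaddr, iova, len,
					       obj_cb, obj_cb_arg);
}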

* Re: [PATCH v3 06/11] mempool: remove callback to get capabilities
  2018-03-26 16:09     ` [PATCH v3 06/11] mempool: remove callback to get capabilities Andrew Rybchenko
@ 2018-04-04 15:10       ` santosh
  2018-04-06 15:51       ` Olivier Matz
  1 sibling, 0 replies; 197+ messages in thread
From: santosh @ 2018-04-04 15:10 UTC (permalink / raw)
  To: Andrew Rybchenko, dev; +Cc: Olivier MATZ, Jerin Jacob


On Monday 26 March 2018 09:39 PM, Andrew Rybchenko wrote:
> The callback was introduced to let generic code to know octeontx
> mempool driver requirements to use single physically contiguous
> memory chunk to store all objects and align object address to
> total object size. Now these requirements are met using a new
> callbacks to calculate required memory chunk size and to populate
> objects using provided memory chunk.
>
> These capability flags are not used anywhere else.
>
> Restricting capabilities to flags is not generic and likely to
> be insufficient to describe mempool driver features. If required
> in the future, API which returns structured information may be
> added.
>
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
> ---

Acked-by: Santosh Shukla <Santosh.Shukla@caviumnetworks.com>

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v3 08/11] mempool/octeontx: prepare to remove register memory area op
  2018-03-26 16:09     ` [PATCH v3 08/11] mempool/octeontx: prepare to remove register memory area op Andrew Rybchenko
@ 2018-04-04 15:12       ` santosh
  0 siblings, 0 replies; 197+ messages in thread
From: santosh @ 2018-04-04 15:12 UTC (permalink / raw)
  To: Andrew Rybchenko, dev; +Cc: Olivier MATZ, Jerin Jacob


On Monday 26 March 2018 09:39 PM, Andrew Rybchenko wrote:
> Callback to populate pool objects has all required information and
> executed a bit later than register memory area callback.
>
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
> ---

Acked-by: Santosh Shukla <Santosh.Shukla@caviumnetworks.com>

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v3 10/11] mempool: remove callback to register memory area
  2018-03-26 16:09     ` [PATCH v3 10/11] mempool: remove callback to register memory area Andrew Rybchenko
@ 2018-04-04 15:13       ` santosh
  2018-04-06 15:52       ` Olivier Matz
  1 sibling, 0 replies; 197+ messages in thread
From: santosh @ 2018-04-04 15:13 UTC (permalink / raw)
  To: Andrew Rybchenko, dev; +Cc: Olivier MATZ


On Monday 26 March 2018 09:39 PM, Andrew Rybchenko wrote:
> The callback is not required any more since there is a new callback
> to populate objects using provided memory area which provides
> the same information.
>
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
> ---

Acked-by: Santosh Shukla <Santosh.Shukla@caviumnetworks.com>

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v3 09/11] mempool/dpaa: prepare to remove register memory area op
  2018-03-26 16:09     ` [PATCH v3 09/11] mempool/dpaa: " Andrew Rybchenko
@ 2018-04-05  8:25       ` Hemant Agrawal
  0 siblings, 0 replies; 197+ messages in thread
From: Hemant Agrawal @ 2018-04-05  8:25 UTC (permalink / raw)
  To: Andrew Rybchenko, dev; +Cc: Olivier MATZ, Hemant Agrawal, Shreyansh Jain


On 3/26/2018 9:39 PM, Andrew Rybchenko wrote:
> Populate mempool driver callback is executed a bit later than
> register memory area, provides the same information and will
> substitute the later since it gives more flexibility and in addition
> to notification about memory area allows to customize how mempool
> objects are stored in memory.
> 
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
> ---
> v2 -> v3:
>   - fix build error because of prototype mismatch (char * -> void *)
> 
> v1 -> v2:
>   - fix build error because of prototype mismatch
> 
>   drivers/mempool/dpaa/dpaa_mempool.c | 13 +++++++------
>   1 file changed, 7 insertions(+), 6 deletions(-)
> 
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v3 01/11] mempool: fix memhdr leak when no objects are populated
  2018-03-26 16:09     ` [PATCH v3 01/11] mempool: fix memhdr leak when no objects are populated Andrew Rybchenko
@ 2018-04-06 15:50       ` Olivier Matz
  0 siblings, 0 replies; 197+ messages in thread
From: Olivier Matz @ 2018-04-06 15:50 UTC (permalink / raw)
  To: Andrew Rybchenko; +Cc: dev, stable

On Mon, Mar 26, 2018 at 05:09:41PM +0100, Andrew Rybchenko wrote:
> Fixes: 84121f197187 ("mempool: store memory chunks in a list")
> Cc: stable@dpdk.org
> 
> Suggested-by: Olivier Matz <olivier.matz@6wind.com>
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>

Acked-by: Olivier Matz <olivier.matz@6wind.com>

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v3 02/11] mempool: rename flag to control IOVA-contiguous objects
  2018-03-26 16:09     ` [PATCH v3 02/11] mempool: rename flag to control IOVA-contiguous objects Andrew Rybchenko
@ 2018-04-06 15:50       ` Olivier Matz
  0 siblings, 0 replies; 197+ messages in thread
From: Olivier Matz @ 2018-04-06 15:50 UTC (permalink / raw)
  To: Andrew Rybchenko; +Cc: dev

On Mon, Mar 26, 2018 at 05:09:42PM +0100, Andrew Rybchenko wrote:
> Flag MEMPOOL_F_NO_PHYS_CONTIG is renamed as MEMPOOL_F_NO_IOVA_CONTIG
> to follow IO memory contiguos terminology.
> MEMPOOL_F_NO_PHYS_CONTIG is kept for backward compatibility and
> deprecated.
> 
> Suggested-by: Olivier Matz <olivier.matz@6wind.com>
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>

Acked-by: Olivier Matz <olivier.matz@6wind.com>

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v3 03/11] mempool: ensure the mempool is initialized before populating
  2018-03-26 16:09     ` [PATCH v3 03/11] mempool: ensure the mempool is initialized before populating Andrew Rybchenko
  2018-04-04 15:06       ` santosh
@ 2018-04-06 15:50       ` Olivier Matz
  1 sibling, 0 replies; 197+ messages in thread
From: Olivier Matz @ 2018-04-06 15:50 UTC (permalink / raw)
  To: Andrew Rybchenko; +Cc: dev, Artem V. Andreev

On Mon, Mar 26, 2018 at 05:09:43PM +0100, Andrew Rybchenko wrote:
> From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>
> 
> Callback to calculate required memory area size may require mempool
> driver data to be already allocated and initialized.
> 
> Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>

Acked-by: Olivier Matz <olivier.matz@6wind.com>

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v3 04/11] mempool: add op to calculate memory size to be allocated
  2018-03-26 16:09     ` [PATCH v3 04/11] mempool: add op to calculate memory size to be allocated Andrew Rybchenko
  2018-04-04 15:08       ` santosh
@ 2018-04-06 15:51       ` Olivier Matz
  2018-04-12 15:22       ` Burakov, Anatoly
  2 siblings, 0 replies; 197+ messages in thread
From: Olivier Matz @ 2018-04-06 15:51 UTC (permalink / raw)
  To: Andrew Rybchenko; +Cc: dev

On Mon, Mar 26, 2018 at 05:09:44PM +0100, Andrew Rybchenko wrote:
> Size of memory chunk required to populate mempool objects depends
> on how objects are stored in the memory. Different mempool drivers
> may have different requirements and a new operation allows to
> calculate memory size in accordance with driver requirements and
> advertise requirements on minimum memory chunk size and alignment
> in a generic way.
> 
> Bump ABI version since the patch breaks it.
> 
> Suggested-by: Olivier Matz <olivier.matz@6wind.com>
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>

Acked-by: Olivier Matz <olivier.matz@6wind.com>

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v3 05/11] mempool: add op to populate objects using provided memory
  2018-03-26 16:09     ` [PATCH v3 05/11] mempool: add op to populate objects using provided memory Andrew Rybchenko
  2018-04-04 15:09       ` santosh
@ 2018-04-06 15:51       ` Olivier Matz
  1 sibling, 0 replies; 197+ messages in thread
From: Olivier Matz @ 2018-04-06 15:51 UTC (permalink / raw)
  To: Andrew Rybchenko; +Cc: dev

On Mon, Mar 26, 2018 at 05:09:45PM +0100, Andrew Rybchenko wrote:
> The callback allows to customize how objects are stored in the
> memory chunk. Default implementation of the callback which simply
> puts objects one by one is available.
> 
> Suggested-by: Olivier Matz <olivier.matz@6wind.com>
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>

Acked-by: Olivier Matz <olivier.matz@6wind.com>

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v3 06/11] mempool: remove callback to get capabilities
  2018-03-26 16:09     ` [PATCH v3 06/11] mempool: remove callback to get capabilities Andrew Rybchenko
  2018-04-04 15:10       ` santosh
@ 2018-04-06 15:51       ` Olivier Matz
  1 sibling, 0 replies; 197+ messages in thread
From: Olivier Matz @ 2018-04-06 15:51 UTC (permalink / raw)
  To: Andrew Rybchenko; +Cc: dev, Santosh Shukla, Jerin Jacob

On Mon, Mar 26, 2018 at 05:09:46PM +0100, Andrew Rybchenko wrote:
> The callback was introduced to let generic code to know octeontx
> mempool driver requirements to use single physically contiguous
> memory chunk to store all objects and align object address to
> total object size. Now these requirements are met using a new
> callbacks to calculate required memory chunk size and to populate
> objects using provided memory chunk.
> 
> These capability flags are not used anywhere else.
> 
> Restricting capabilities to flags is not generic and likely to
> be insufficient to describe mempool driver features. If required
> in the future, API which returns structured information may be
> added.
> 
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>

Acked-by: Olivier Matz <olivier.matz@6wind.com>

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v3 07/11] mempool: deprecate xmem functions
  2018-03-26 16:09     ` [PATCH v3 07/11] mempool: deprecate xmem functions Andrew Rybchenko
@ 2018-04-06 15:52       ` Olivier Matz
  0 siblings, 0 replies; 197+ messages in thread
From: Olivier Matz @ 2018-04-06 15:52 UTC (permalink / raw)
  To: Andrew Rybchenko; +Cc: dev, Thomas Monjalon

On Mon, Mar 26, 2018 at 05:09:47PM +0100, Andrew Rybchenko wrote:
> Move rte_mempool_xmem_size() code to internal helper function
> since it is required in two places: deprecated rte_mempool_xmem_size()
> and non-deprecated rte_mempool_op_calc_mem_size_default().
> 
> Suggested-by: Olivier Matz <olivier.matz@6wind.com>
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>

Acked-by: Olivier Matz <olivier.matz@6wind.com>

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v3 10/11] mempool: remove callback to register memory area
  2018-03-26 16:09     ` [PATCH v3 10/11] mempool: remove callback to register memory area Andrew Rybchenko
  2018-04-04 15:13       ` santosh
@ 2018-04-06 15:52       ` Olivier Matz
  1 sibling, 0 replies; 197+ messages in thread
From: Olivier Matz @ 2018-04-06 15:52 UTC (permalink / raw)
  To: Andrew Rybchenko; +Cc: dev

On Mon, Mar 26, 2018 at 05:09:50PM +0100, Andrew Rybchenko wrote:
> The callback is not required any more since there is a new callback
> to populate objects using provided memory area which provides
> the same information.
> 
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>

Acked-by: Olivier Matz <olivier.matz@6wind.com>

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v3 11/11] mempool: support flushing the default cache of the mempool
  2018-03-26 16:09     ` [PATCH v3 11/11] mempool: support flushing the default cache of the mempool Andrew Rybchenko
@ 2018-04-06 15:53       ` Olivier Matz
  0 siblings, 0 replies; 197+ messages in thread
From: Olivier Matz @ 2018-04-06 15:53 UTC (permalink / raw)
  To: Andrew Rybchenko; +Cc: dev, Artem V. Andreev

On Mon, Mar 26, 2018 at 05:09:51PM +0100, Andrew Rybchenko wrote:
> From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>
> 
> The mempool get/put API takes care of the cache itself, but sometimes it is
> required to flush the cache explicitly.
> 
> The function is moved within the file since it now requires
> rte_mempool_default_cache().
> 
> Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>

Acked-by: Olivier Matz <olivier.matz@6wind.com>

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v3 04/11] mempool: add op to calculate memory size to be allocated
  2018-03-26 16:09     ` [PATCH v3 04/11] mempool: add op to calculate memory size to be allocated Andrew Rybchenko
  2018-04-04 15:08       ` santosh
  2018-04-06 15:51       ` Olivier Matz
@ 2018-04-12 15:22       ` Burakov, Anatoly
  2 siblings, 0 replies; 197+ messages in thread
From: Burakov, Anatoly @ 2018-04-12 15:22 UTC (permalink / raw)
  To: Andrew Rybchenko, dev; +Cc: Olivier MATZ

On 26-Mar-18 5:09 PM, Andrew Rybchenko wrote:
> The size of the memory chunk required to populate mempool objects depends
> on how objects are stored in the memory. Different mempool drivers
> may have different requirements, and a new operation allows calculating
> the memory size in accordance with driver requirements and advertising
> the requirements on minimum memory chunk size and alignment
> in a generic way.
> 
> Bump ABI version since the patch breaks it.
> 
> Suggested-by: Olivier Matz <olivier.matz@6wind.com>
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
> ---

Hi Andrew,

<...>

> -	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
>   	for (mz_id = 0, n = mp->size; n > 0; mz_id++, n -= ret) {
> -		size = rte_mempool_xmem_size(n, total_elt_sz, pg_shift,
> -						mp->flags);
> +		size_t min_chunk_size;
> +
> +		mem_size = rte_mempool_ops_calc_mem_size(mp, n, pg_shift,
> +				&min_chunk_size, &align);
> +		if (mem_size < 0) {
> +			ret = mem_size;
> +			goto fail;
> +		}
>   
>   		ret = snprintf(mz_name, sizeof(mz_name),
>   			RTE_MEMPOOL_MZ_FORMAT "_%d", mp->name, mz_id);
> @@ -606,7 +600,7 @@ rte_mempool_populate_default(struct rte_mempool *mp)
>   			goto fail;
>   		}
>   
> -		mz = rte_memzone_reserve_aligned(mz_name, size,
> +		mz = rte_memzone_reserve_aligned(mz_name, mem_size,
>   			mp->socket_id, mz_flags, align);
>   		/* not enough memory, retry with the biggest zone we have */
>   		if (mz == NULL)
> @@ -617,6 +611,12 @@ rte_mempool_populate_default(struct rte_mempool *mp)
>   			goto fail;
>   		}
>   
> +		if (mz->len < min_chunk_size) {
> +			rte_memzone_free(mz);
> +			ret = -ENOMEM;
> +			goto fail;
> +		}
> +
>   		if (mp->flags & MEMPOOL_F_NO_IOVA_CONTIG)
>   			iova = RTE_BAD_IOVA;

OK by me, but needs to be rebased.

>   		else
> @@ -649,13 +649,14 @@ rte_mempool_populate_default(struct rte_mempool *mp)
>   static size_t
>   get_anon_size(const struct rte_mempool *mp)
>   {
> -	size_t size, total_elt_sz, pg_sz, pg_shift;
> +	size_t size, pg_sz, pg_shift;
> +	size_t min_chunk_size;
> +	size_t align;
>   
>   	pg_sz = getpagesize();

<...>

>   
> +/**
> + * Calculate memory size required to store given number of objects.
> + *
> + * If mempool objects are not required to be IOVA-contiguous
> + * (the flag MEMPOOL_F_NO_IOVA_CONTIG is set), min_chunk_size defines
> + * virtually contiguous chunk size. Otherwise, if mempool objects must
> + * be IOVA-contiguous (the flag MEMPOOL_F_NO_IOVA_CONTIG is clear),
> + * min_chunk_size defines IOVA-contiguous chunk size.
> + *
> + * @param[in] mp
> + *   Pointer to the memory pool.
> + * @param[in] obj_num
> + *   Number of objects.
> + * @param[in] pg_shift
> + *   LOG2 of the physical pages size. If set to 0, ignore page boundaries.
> + * @param[out] min_chunk_size
> + *   Location for minimum size of the memory chunk which may be used to
> + *   store memory pool objects.
> + * @param[out] align
> + *   Location for required memory chunk alignment.
> + * @return
> + *   Required memory size aligned at page boundary.
> + */
> +typedef ssize_t (*rte_mempool_calc_mem_size_t)(const struct rte_mempool *mp,
> +		uint32_t obj_num,  uint32_t pg_shift,
> +		size_t *min_chunk_size, size_t *align);
> +
> +/**
> + * Default way to calculate memory size required to store given number of
> + * objects.
> + *
> + * If page boundaries may be ignored, it is just a product of total
> + * object size including header and trailer and number of objects.
> + * Otherwise, it is a number of pages required to store given number of
> + * objects without crossing page boundary.
> + *
> + * Note that if object size is bigger than page size, then it assumes
> + * that pages are grouped in subsets of physically continuous pages big
> + * enough to store at least one object.
> + *
> + * If mempool driver requires object addresses to be block size aligned
> + * (MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS), space for one extra element is
> + * reserved to be able to meet the requirement.
> + *
> + * Minimum size of memory chunk is either all required space, if
> + * capabilities say that whole memory area must be physically contiguous
> + * (MEMPOOL_F_CAPA_PHYS_CONTIG), or a maximum of the page size and total
> + * element size.
> + *
> + * Required memory chunk alignment is a maximum of page size and cache
> + * line size.
> + */
> +ssize_t rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
> +		uint32_t obj_num, uint32_t pg_shift,
> +		size_t *min_chunk_size, size_t *align);

For API docs and wording,

Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>

Should be pretty straightforward to rebase, so you probably should keep 
my ack for v4.

-- 
Thanks,
Anatoly

^ permalink raw reply	[flat|nested] 197+ messages in thread

* [PATCH v4 00/11] mempool: prepare to add bucket driver
  2017-11-24 16:06 [RFC PATCH 0/6] mempool: add bucket mempool driver Andrew Rybchenko
                   ` (7 preceding siblings ...)
  2018-01-23 13:15 ` [RFC v2 00/17] " Andrew Rybchenko
@ 2018-04-16 13:24 ` Andrew Rybchenko
  2018-04-16 13:24   ` [PATCH v4 01/11] mempool: fix memhdr leak when no objects are populated Andrew Rybchenko
                     ` (11 more replies)
  2018-04-16 13:33 ` [PATCH v2 0/6] mempool: " Andrew Rybchenko
                   ` (2 subsequent siblings)
  11 siblings, 12 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-04-16 13:24 UTC (permalink / raw)
  To: dev
  Cc: Olivier MATZ, Thomas Monjalon, Anatoly Burakov, Santosh Shukla,
	Jerin Jacob, Hemant Agrawal, Shreyansh Jain

The initial patch series [1] is split into two to simplify processing.
The second series relies on this one and will add bucket mempool driver
and related ops.

The patch series has generic enhancements suggested by Olivier.
Basically it adds driver callbacks to calculate the required memory size and
to populate objects using a provided memory area. This allows removing the
so-called capability flags used before to tell the generic code how to
allocate and slice allocated memory into mempool objects.
The clean-up which removes get_capabilities and register_memory_area is
not strictly required, but I think it is the right thing to do.
Existing mempool drivers are updated.
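
As a rough sketch of the resulting driver-side interface (not part of the
series itself; the driver name below is hypothetical), a mempool driver plugs
the new optional callbacks into its ops structure and falls back to the
library defaults where it has no special needs:

  #include <rte_mempool.h>

  /*
   * Hypothetical driver: only the fields relevant to this series are shown;
   * the mandatory alloc/free/enqueue/dequeue/get_count callbacks are omitted
   * for brevity.
   */
  static const struct rte_mempool_ops sketch_ops = {
          .name = "sketch",
          .calc_mem_size = NULL, /* NULL -> rte_mempool_op_calc_mem_size_default() */
          .populate = NULL,      /* NULL -> rte_mempool_op_populate_default() */
  };

  MEMPOOL_REGISTER_OPS(sketch_ops);

Leaving a callback as NULL keeps the default behaviour, so existing drivers
continue to work unmodified.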

rte_mempool_populate_iova_tab() is also deprecated in v2 as agreed in [2].
Unfortunately it requires adding the -Wno-deprecated-declarations flag
to librte_mempool since the function is used by the earlier deprecated
rte_mempool_populate_phys_tab(). If the latter may be removed in the
release, we can avoid adding the flag which allows usage of deprecated
functions.

A new patch is added to the series in v3 to rename MEMPOOL_F_NO_PHYS_CONTIG
to MEMPOOL_F_NO_IOVA_CONTIG as agreed in [3].
MEMPOOL_F_CAPA_PHYS_CONTIG is not renamed since it is removed in this
patchset.

It breaks ABI since it changes rte_mempool_ops. It also removes
rte_mempool_ops_register_memory_area() and
rte_mempool_ops_get_capabilities() since the corresponding callbacks are
removed.

Internal global functions are not listed in the map file since they are not
part of the external API.

[1] https://dpdk.org/ml/archives/dev/2018-January/088698.html
[2] https://dpdk.org/ml/archives/dev/2018-March/093186.html
[3] https://dpdk.org/ml/archives/dev/2018-March/093345.html

v3 -> v4:
  - rebase on memory rework

v2 -> v3:
  - fix build error in mempool/dpaa: prepare to remove register memory area op

v1 -> v2:
  - deprecate rte_mempool_populate_iova_tab()
  - add patch to fix memory leak if no objects are populated
  - add patch to rename MEMPOOL_F_NO_PHYS_CONTIG
  - minor fixes (typos, blank line at the end of file)
  - highlight meaning of min_chunk_size (when it is virtually or
    physically contiguous)
  - make sure that mempool is initialized in rte_mempool_populate_anon()
  - move patch to ensure that mempool is initialized earlier in the series

RFCv2 -> v1:
  - split the series in two
  - squash octeontx patches which implement calc_mem_size and populate
    callbacks into the patch which removes get_capabilities since it is
    the easiest way to untangle the tangle of tightly related library
    functions and flags advertised by the driver
  - consistently name default callbacks
  - move default callbacks to dedicated file
  - see detailed description in patches

RFCv1 -> RFCv2:
  - add driver ops to calculate required memory size and populate
    mempool objects, remove extra flags which were required before
    to control it
  - transition of octeontx and dpaa drivers to the new callbacks
  - change info API to get information from the driver required by the
    API user to know the contiguous block size
  - remove get_capabilities (not required any more and may be
    substituted with more in info get API)
  - remove register_memory_area since it is substituted with
    populate callback which can do more
  - use SPDX tags
  - avoid all objects affinity to single lcore
  - fix bucket get_count
  - deprecate XMEM API
  - avoid introduction of a new function to flush cache
  - fix NO_CACHE_ALIGN case in bucket mempool


Andrew Rybchenko (9):
  mempool: fix memhdr leak when no objects are populated
  mempool: rename flag to control IOVA-contiguous objects
  mempool: add op to calculate memory size to be allocated
  mempool: add op to populate objects using provided memory
  mempool: remove callback to get capabilities
  mempool: deprecate xmem functions
  mempool/octeontx: prepare to remove register memory area op
  mempool/dpaa: prepare to remove register memory area op
  mempool: remove callback to register memory area

Artem V. Andreev (2):
  mempool: ensure the mempool is initialized before populating
  mempool: support flushing the default cache of the mempool

 doc/guides/rel_notes/deprecation.rst            |  12 +-
 doc/guides/rel_notes/release_18_05.rst          |  34 ++-
 drivers/mempool/dpaa/dpaa_mempool.c             |  13 +-
 drivers/mempool/octeontx/rte_mempool_octeontx.c |  64 ++++--
 drivers/net/thunderx/nicvf_ethdev.c             |   2 +-
 lib/librte_mempool/Makefile                     |   6 +-
 lib/librte_mempool/meson.build                  |  17 +-
 lib/librte_mempool/rte_mempool.c                | 240 ++++++++++----------
 lib/librte_mempool/rte_mempool.h                | 280 +++++++++++++++++-------
 lib/librte_mempool/rte_mempool_ops.c            |  37 ++--
 lib/librte_mempool/rte_mempool_ops_default.c    |  51 +++++
 lib/librte_mempool/rte_mempool_version.map      |  10 +-
 test/test/test_mempool.c                        |  31 ---
 13 files changed, 528 insertions(+), 269 deletions(-)
 create mode 100644 lib/librte_mempool/rte_mempool_ops_default.c

-- 
2.7.4

^ permalink raw reply	[flat|nested] 197+ messages in thread

* [PATCH v4 01/11] mempool: fix memhdr leak when no objects are populated
  2018-04-16 13:24 ` [PATCH v4 00/11] mempool: prepare to " Andrew Rybchenko
@ 2018-04-16 13:24   ` Andrew Rybchenko
  2018-04-16 13:24   ` [PATCH v4 02/11] mempool: rename flag to control IOVA-contiguous objects Andrew Rybchenko
                     ` (10 subsequent siblings)
  11 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-04-16 13:24 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ, stable

Fixes: 84121f197187 ("mempool: store memory chunks in a list")
Cc: stable@dpdk.org

Suggested-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
v3 -> v4:
 - none

v2 -> v3:
 - none

v1 -> v2:
 - added in v2 as discussed in [1]

[1] https://dpdk.org/ml/archives/dev/2018-March/093329.html

 lib/librte_mempool/rte_mempool.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 103c015..3b31a55 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -421,12 +421,18 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	}
 
 	/* not enough room to store one object */
-	if (i == 0)
-		return -EINVAL;
+	if (i == 0) {
+		ret = -EINVAL;
+		goto fail;
+	}
 
 	STAILQ_INSERT_TAIL(&mp->mem_list, memhdr, next);
 	mp->nb_mem_chunks++;
 	return i;
+
+fail:
+	rte_free(memhdr);
+	return ret;
 }
 
 int
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [PATCH v4 02/11] mempool: rename flag to control IOVA-contiguous objects
  2018-04-16 13:24 ` [PATCH v4 00/11] mempool: prepare to " Andrew Rybchenko
  2018-04-16 13:24   ` [PATCH v4 01/11] mempool: fix memhdr leak when no objects are populated Andrew Rybchenko
@ 2018-04-16 13:24   ` Andrew Rybchenko
  2018-04-16 13:24   ` [PATCH v4 03/11] mempool: ensure the mempool is initialized before populating Andrew Rybchenko
                     ` (9 subsequent siblings)
  11 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-04-16 13:24 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ

Flag MEMPOOL_F_NO_PHYS_CONTIG is renamed to MEMPOOL_F_NO_IOVA_CONTIG
to follow the IO memory contiguity terminology.
MEMPOOL_F_NO_PHYS_CONTIG is kept for backward compatibility and
deprecated.
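
Usage is unchanged apart from the name. A minimal sketch (pool name and
sizes below are arbitrary, not taken from the patch):

  #include <rte_lcore.h>
  #include <rte_mempool.h>

  /* Create a pool whose objects need not be IOVA-contiguous. */
  static struct rte_mempool *
  sketch_pool_create(void)
  {
          return rte_mempool_create("sketch_pool", 8191, 2048, 256, 0,
                                    NULL, NULL, NULL, NULL,
                                    rte_socket_id(),
                                    MEMPOOL_F_NO_IOVA_CONTIG);
  }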

Suggested-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
v3 -> v4:
 - trivial rebase

v2 -> v3:
 - none

v1 -> v2:
 - added in v2 as discussed in [1]

[1] https://dpdk.org/ml/archives/dev/2018-March/093345.html

 drivers/net/thunderx/nicvf_ethdev.c | 2 +-
 lib/librte_mempool/rte_mempool.c    | 6 +++---
 lib/librte_mempool/rte_mempool.h    | 9 +++++----
 3 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 75e9d16..8f6b0b6 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -1308,7 +1308,7 @@ nicvf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 	}
 
 	/* Mempool memory must be physically contiguous */
-	if (mp->flags & MEMPOOL_F_NO_PHYS_CONTIG) {
+	if (mp->flags & MEMPOOL_F_NO_IOVA_CONTIG) {
 		PMD_INIT_LOG(ERR, "Mempool memory must be physically contiguous");
 		return -EINVAL;
 	}
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 3b31a55..d9c09e1 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -459,7 +459,7 @@ rte_mempool_populate_iova_tab(struct rte_mempool *mp, char *vaddr,
 	if (mp->nb_mem_chunks != 0)
 		return -EEXIST;
 
-	if (mp->flags & MEMPOOL_F_NO_PHYS_CONTIG)
+	if (mp->flags & MEMPOOL_F_NO_IOVA_CONTIG)
 		return rte_mempool_populate_iova(mp, vaddr, RTE_BAD_IOVA,
 			pg_num * pg_sz, free_cb, opaque);
 
@@ -513,7 +513,7 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
 	if (RTE_ALIGN_CEIL(len, pg_sz) != len)
 		return -EINVAL;
 
-	if (mp->flags & MEMPOOL_F_NO_PHYS_CONTIG)
+	if (mp->flags & MEMPOOL_F_NO_IOVA_CONTIG)
 		return rte_mempool_populate_iova(mp, addr, RTE_BAD_IOVA,
 			len, free_cb, opaque);
 
@@ -583,7 +583,7 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 	/* update mempool capabilities */
 	mp->flags |= mp_flags;
 
-	no_contig = mp->flags & MEMPOOL_F_NO_PHYS_CONTIG;
+	no_contig = mp->flags & MEMPOOL_F_NO_IOVA_CONTIG;
 	force_contig = mp->flags & MEMPOOL_F_CAPA_PHYS_CONTIG;
 
 	/*
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 8b1b7f7..e531a15 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -244,7 +244,8 @@ struct rte_mempool {
 #define MEMPOOL_F_SP_PUT         0x0004 /**< Default put is "single-producer".*/
 #define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
 #define MEMPOOL_F_POOL_CREATED   0x0010 /**< Internal: pool is created. */
-#define MEMPOOL_F_NO_PHYS_CONTIG 0x0020 /**< Don't need physically contiguous objs. */
+#define MEMPOOL_F_NO_IOVA_CONTIG 0x0020 /**< Don't need IOVA contiguous objs. */
+#define MEMPOOL_F_NO_PHYS_CONTIG MEMPOOL_F_NO_IOVA_CONTIG /* deprecated */
 /**
  * This capability flag is advertised by a mempool handler, if the whole
  * memory area containing the objects must be physically contiguous.
@@ -710,8 +711,8 @@ typedef void (rte_mempool_ctor_t)(struct rte_mempool *, void *);
  *   - MEMPOOL_F_SC_GET: If this flag is set, the default behavior
  *     when using rte_mempool_get() or rte_mempool_get_bulk() is
  *     "single-consumer". Otherwise, it is "multi-consumers".
- *   - MEMPOOL_F_NO_PHYS_CONTIG: If set, allocated objects won't
- *     necessarily be contiguous in physical memory.
+ *   - MEMPOOL_F_NO_IOVA_CONTIG: If set, allocated objects won't
+ *     necessarily be contiguous in IO memory.
  * @return
  *   The pointer to the new allocated mempool, on success. NULL on error
  *   with rte_errno set appropriately. Possible rte_errno values include:
@@ -1439,7 +1440,7 @@ rte_mempool_empty(const struct rte_mempool *mp)
  *   A pointer (virtual address) to the element of the pool.
  * @return
  *   The IO address of the elt element.
- *   If the mempool was created with MEMPOOL_F_NO_PHYS_CONTIG, the
+ *   If the mempool was created with MEMPOOL_F_NO_IOVA_CONTIG, the
  *   returned value is RTE_BAD_IOVA.
  */
 static inline rte_iova_t
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [PATCH v4 03/11] mempool: ensure the mempool is initialized before populating
  2018-04-16 13:24 ` [PATCH v4 00/11] mempool: prepare to " Andrew Rybchenko
  2018-04-16 13:24   ` [PATCH v4 01/11] mempool: fix memhdr leak when no objects are populated Andrew Rybchenko
  2018-04-16 13:24   ` [PATCH v4 02/11] mempool: rename flag to control IOVA-contiguous objects Andrew Rybchenko
@ 2018-04-16 13:24   ` Andrew Rybchenko
  2018-04-16 13:24   ` [PATCH v4 04/11] mempool: add op to calculate memory size to be allocated Andrew Rybchenko
                     ` (8 subsequent siblings)
  11 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-04-16 13:24 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ, Artem V. Andreev

From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>

The callback to calculate the required memory area size may require the
mempool driver data to be already allocated and initialized.

Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Santosh Shukla <Santosh.Shukla@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
v3 -> v4:
 - rebase

v2 -> v3:
 - none

v1 -> v2:
 - add init check to mempool_ops_alloc_once()
 - move earlier in the patch series since it is required when driver
   ops are called and it is better to have it before new ops are added

RFCv2 -> v1:
 - rename helper function as mempool_ops_alloc_once()

 lib/librte_mempool/rte_mempool.c | 33 ++++++++++++++++++++++++++-------
 1 file changed, 26 insertions(+), 7 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index d9c09e1..b15b79b 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -346,6 +346,21 @@ rte_mempool_free_memchunks(struct rte_mempool *mp)
 	}
 }
 
+static int
+mempool_ops_alloc_once(struct rte_mempool *mp)
+{
+	int ret;
+
+	/* create the internal ring if not already done */
+	if ((mp->flags & MEMPOOL_F_POOL_CREATED) == 0) {
+		ret = rte_mempool_ops_alloc(mp);
+		if (ret != 0)
+			return ret;
+		mp->flags |= MEMPOOL_F_POOL_CREATED;
+	}
+	return 0;
+}
+
 /* Add objects in the pool, using a physically contiguous memory
  * zone. Return the number of objects added, or a negative value
  * on error.
@@ -362,13 +377,9 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	struct rte_mempool_memhdr *memhdr;
 	int ret;
 
-	/* create the internal ring if not already done */
-	if ((mp->flags & MEMPOOL_F_POOL_CREATED) == 0) {
-		ret = rte_mempool_ops_alloc(mp);
-		if (ret != 0)
-			return ret;
-		mp->flags |= MEMPOOL_F_POOL_CREATED;
-	}
+	ret = mempool_ops_alloc_once(mp);
+	if (ret != 0)
+		return ret;
 
 	/* Notify memory area to mempool */
 	ret = rte_mempool_ops_register_memory_area(mp, vaddr, iova, len);
@@ -570,6 +581,10 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 	int ret;
 	bool force_contig, no_contig, try_contig, no_pageshift;
 
+	ret = mempool_ops_alloc_once(mp);
+	if (ret != 0)
+		return ret;
+
 	/* mempool must not be populated */
 	if (mp->nb_mem_chunks != 0)
 		return -EEXIST;
@@ -774,6 +789,10 @@ rte_mempool_populate_anon(struct rte_mempool *mp)
 		return 0;
 	}
 
+	ret = mempool_ops_alloc_once(mp);
+	if (ret != 0)
+		return ret;
+
 	/* get chunk of virtually continuous memory */
 	size = get_anon_size(mp);
 	addr = mmap(NULL, size, PROT_READ | PROT_WRITE,
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [PATCH v4 04/11] mempool: add op to calculate memory size to be allocated
  2018-04-16 13:24 ` [PATCH v4 00/11] mempool: prepare to " Andrew Rybchenko
                     ` (2 preceding siblings ...)
  2018-04-16 13:24   ` [PATCH v4 03/11] mempool: ensure the mempool is initialized before populating Andrew Rybchenko
@ 2018-04-16 13:24   ` Andrew Rybchenko
  2018-04-16 15:33     ` Olivier Matz
  2018-04-17 10:23     ` Burakov, Anatoly
  2018-04-16 13:24   ` [PATCH v4 05/11] mempool: add op to populate objects using provided memory Andrew Rybchenko
                     ` (7 subsequent siblings)
  11 siblings, 2 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-04-16 13:24 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ, Anatoly Burakov

The size of the memory chunk required to populate mempool objects depends
on how objects are stored in the memory. Different mempool drivers
may have different requirements, and a new operation allows calculating
the memory size in accordance with driver requirements and advertising
the requirements on minimum memory chunk size and alignment
in a generic way.

Bump ABI version since the patch breaks it.
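
For illustration only (a hypothetical driver callback, not part of this
patch): a driver that needs the whole memory area to be contiguous can reuse
the default helper and report the full size as the minimum chunk size, which
is essentially what the octeontx driver does later in the series:

  #include <rte_mempool.h>

  static ssize_t
  sketch_calc_mem_size(const struct rte_mempool *mp, uint32_t obj_num,
                       uint32_t pg_shift, size_t *min_chunk_size, size_t *align)
  {
          ssize_t mem_size;

          /* Start from the default size/alignment calculation. */
          mem_size = rte_mempool_op_calc_mem_size_default(mp, obj_num, pg_shift,
                                                          min_chunk_size, align);
          if (mem_size >= 0) {
                  /* Require the whole area in one contiguous chunk. */
                  *min_chunk_size = mem_size;
          }

          return mem_size;
  }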

Suggested-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
v3 -> v4:
 - rebased on top of memory rework
 - dropped previous acks since the rebase is not trivial
 - check size calculation failure in rte_mempool_populate_anon() and
   rte_mempool_memchunk_anon_free()

v2 -> v3:
 - none

v1 -> v2:
 - clarify min_chunk_size meaning
 - rebase on top of patch series which fixes library version in meson
   build

RFCv2 -> v1:
 - move default calc_mem_size callback to rte_mempool_ops_default.c
 - add ABI changes to release notes
 - name default callback consistently: rte_mempool_op_<callback>_default()
 - bump ABI version since it is the first patch which breaks ABI
 - describe default callback behaviour in detail
 - avoid introduction of internal function to cope with deprecation
   (keep it to deprecation patch)
 - move cache-line or page boundary chunk alignment to default callback
 - highlight that min_chunk_size and align parameters are output only

 doc/guides/rel_notes/deprecation.rst         |   3 +-
 doc/guides/rel_notes/release_18_05.rst       |   8 +-
 lib/librte_mempool/Makefile                  |   3 +-
 lib/librte_mempool/meson.build               |   5 +-
 lib/librte_mempool/rte_mempool.c             | 114 +++++++++++++++------------
 lib/librte_mempool/rte_mempool.h             |  86 +++++++++++++++++++-
 lib/librte_mempool/rte_mempool_ops.c         |  18 +++++
 lib/librte_mempool/rte_mempool_ops_default.c |  38 +++++++++
 lib/librte_mempool/rte_mempool_version.map   |   7 ++
 9 files changed, 225 insertions(+), 57 deletions(-)
 create mode 100644 lib/librte_mempool/rte_mempool_ops_default.c

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index c929dcc..2aa5ef3 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -60,8 +60,7 @@ Deprecation Notices
 
   - removal of ``get_capabilities`` mempool ops and related flags.
   - substitute ``register_memory_area`` with ``populate`` ops.
-  - addition of new ops to customize required memory chunk calculation,
-    customize objects population and allocate contiguous
+  - addition of new ops to customize objects population and allocate contiguous
     block of objects if underlying driver supports it.
 
 * mbuf: The opaque ``mbuf->hash.sched`` field will be updated to support generic
diff --git a/doc/guides/rel_notes/release_18_05.rst b/doc/guides/rel_notes/release_18_05.rst
index 84295e4..7dbe7ac 100644
--- a/doc/guides/rel_notes/release_18_05.rst
+++ b/doc/guides/rel_notes/release_18_05.rst
@@ -195,6 +195,12 @@ ABI Changes
   type ``uint16_t``: ``burst_size``, ``ring_size``, and ``nb_queues``. These
   are parameter values recommended for use by the PMD.
 
+* **Changed rte_mempool_ops structure.**
+
+  A new callback ``calc_mem_size`` has been added to ``rte_mempool_ops``
+  to allow to customize required memory size calculation.
+
+
 Removed Items
 -------------
 
@@ -267,7 +273,7 @@ The libraries prepended with a plus sign were incremented in this version.
      librte_latencystats.so.1
      librte_lpm.so.2
    + librte_mbuf.so.4
-     librte_mempool.so.3
+   + librte_mempool.so.4
    + librte_meter.so.2
      librte_metrics.so.1
      librte_net.so.1
diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
index 1f85d34..421e2a7 100644
--- a/lib/librte_mempool/Makefile
+++ b/lib/librte_mempool/Makefile
@@ -11,7 +11,7 @@ LDLIBS += -lrte_eal -lrte_ring
 
 EXPORT_MAP := rte_mempool_version.map
 
-LIBABIVER := 3
+LIBABIVER := 4
 
 # memseg walk is not yet part of stable API
 CFLAGS += -DALLOW_EXPERIMENTAL_API
@@ -19,6 +19,7 @@ CFLAGS += -DALLOW_EXPERIMENTAL_API
 # all source are stored in SRCS-y
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool.c
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_ops.c
+SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_ops_default.c
 # install includes
 SYMLINK-$(CONFIG_RTE_LIBRTE_MEMPOOL)-include := rte_mempool.h
 
diff --git a/lib/librte_mempool/meson.build b/lib/librte_mempool/meson.build
index 89506c5..6181ad8 100644
--- a/lib/librte_mempool/meson.build
+++ b/lib/librte_mempool/meson.build
@@ -1,8 +1,9 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2017 Intel Corporation
 
-version = 3
-sources = files('rte_mempool.c', 'rte_mempool_ops.c')
+version = 4
+sources = files('rte_mempool.c', 'rte_mempool_ops.c',
+		'rte_mempool_ops_default.c')
 headers = files('rte_mempool.h')
 deps += ['ring']
 
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index b15b79b..fdcee05 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -574,12 +574,12 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 	unsigned int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
 	char mz_name[RTE_MEMZONE_NAMESIZE];
 	const struct rte_memzone *mz;
-	size_t size, total_elt_sz, align, pg_sz, pg_shift;
+	ssize_t mem_size;
+	size_t align, pg_sz, pg_shift;
 	rte_iova_t iova;
 	unsigned mz_id, n;
-	unsigned int mp_flags;
 	int ret;
-	bool force_contig, no_contig, try_contig, no_pageshift;
+	bool no_contig, try_contig, no_pageshift;
 
 	ret = mempool_ops_alloc_once(mp);
 	if (ret != 0)
@@ -589,22 +589,12 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 	if (mp->nb_mem_chunks != 0)
 		return -EEXIST;
 
-	/* Get mempool capabilities */
-	mp_flags = 0;
-	ret = rte_mempool_ops_get_capabilities(mp, &mp_flags);
-	if ((ret < 0) && (ret != -ENOTSUP))
-		return ret;
-
-	/* update mempool capabilities */
-	mp->flags |= mp_flags;
-
 	no_contig = mp->flags & MEMPOOL_F_NO_IOVA_CONTIG;
-	force_contig = mp->flags & MEMPOOL_F_CAPA_PHYS_CONTIG;
 
 	/*
 	 * the following section calculates page shift and page size values.
 	 *
-	 * these values impact the result of rte_mempool_xmem_size(), which
+	 * these values impact the result of calc_mem_size operation, which
 	 * returns the amount of memory that should be allocated to store the
 	 * desired number of objects. when not zero, it allocates more memory
 	 * for the padding between objects, to ensure that an object does not
@@ -625,7 +615,7 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 	 *
 	 * if our IO addresses are virtual, not actual physical (IOVA as VA
 	 * case), then no page shift needed - our memory allocation will give us
-	 * contiguous physical memory as far as the hardware is concerned, so
+	 * contiguous IO memory as far as the hardware is concerned, so
 	 * act as if we're getting contiguous memory.
 	 *
 	 * if our IO addresses are physical, we may get memory from bigger
@@ -643,39 +633,35 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 	 * 1G page on a 10MB memzone). If we fail to get enough contiguous
 	 * memory, then we'll go and reserve space page-by-page.
 	 */
-	no_pageshift = no_contig || force_contig ||
-			rte_eal_iova_mode() == RTE_IOVA_VA;
+	no_pageshift = no_contig || rte_eal_iova_mode() == RTE_IOVA_VA;
 	try_contig = !no_contig && !no_pageshift && rte_eal_has_hugepages();
-	if (force_contig)
-		mz_flags |= RTE_MEMZONE_IOVA_CONTIG;
 
 	if (no_pageshift) {
 		pg_sz = 0;
 		pg_shift = 0;
-		align = RTE_CACHE_LINE_SIZE;
 	} else if (try_contig) {
 		pg_sz = get_min_page_size();
 		pg_shift = rte_bsf32(pg_sz);
-		/* we're trying to reserve contiguous memzone first, so try
-		 * align to cache line; if we fail to reserve a contiguous
-		 * memzone, we'll adjust alignment to equal pagesize later.
-		 */
-		align = RTE_CACHE_LINE_SIZE;
 	} else {
 		pg_sz = getpagesize();
 		pg_shift = rte_bsf32(pg_sz);
-		align = pg_sz;
 	}
 
-	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
 	for (mz_id = 0, n = mp->size; n > 0; mz_id++, n -= ret) {
+		size_t min_chunk_size;
 		unsigned int flags;
+
 		if (try_contig || no_pageshift)
-			size = rte_mempool_xmem_size(n, total_elt_sz, 0,
-				mp->flags);
+			mem_size = rte_mempool_ops_calc_mem_size(mp, n,
+					0, &min_chunk_size, &align);
 		else
-			size = rte_mempool_xmem_size(n, total_elt_sz, pg_shift,
-				mp->flags);
+			mem_size = rte_mempool_ops_calc_mem_size(mp, n,
+					pg_shift, &min_chunk_size, &align);
+
+		if (mem_size < 0) {
+			ret = mem_size;
+			goto fail;
+		}
 
 		ret = snprintf(mz_name, sizeof(mz_name),
 			RTE_MEMPOOL_MZ_FORMAT "_%d", mp->name, mz_id);
@@ -692,27 +678,31 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 		if (try_contig)
 			flags |= RTE_MEMZONE_IOVA_CONTIG;
 
-		mz = rte_memzone_reserve_aligned(mz_name, size, mp->socket_id,
-				flags, align);
+		mz = rte_memzone_reserve_aligned(mz_name, mem_size,
+				mp->socket_id, flags, align);
 
-		/* if we were trying to allocate contiguous memory, adjust
-		 * memzone size and page size to fit smaller page sizes, and
-		 * try again.
+		/* if we were trying to allocate contiguous memory, failed and
+		 * minimum required contiguous chunk fits minimum page, adjust
+		 * memzone size to the page size, and try again.
 		 */
-		if (mz == NULL && try_contig) {
+		if (mz == NULL && try_contig && min_chunk_size <= pg_sz) {
 			try_contig = false;
 			flags &= ~RTE_MEMZONE_IOVA_CONTIG;
-			align = pg_sz;
-			size = rte_mempool_xmem_size(n, total_elt_sz,
-				pg_shift, mp->flags);
 
-			mz = rte_memzone_reserve_aligned(mz_name, size,
+			mem_size = rte_mempool_ops_calc_mem_size(mp, n,
+					pg_shift, &min_chunk_size, &align);
+			if (mem_size < 0) {
+				ret = mem_size;
+				goto fail;
+			}
+
+			mz = rte_memzone_reserve_aligned(mz_name, mem_size,
 				mp->socket_id, flags, align);
 		}
 		/* don't try reserving with 0 size if we were asked to reserve
 		 * IOVA-contiguous memory.
 		 */
-		if (!force_contig && mz == NULL) {
+		if (min_chunk_size < (size_t)mem_size && mz == NULL) {
 			/* not enough memory, retry with the biggest zone we
 			 * have
 			 */
@@ -724,6 +714,12 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 			goto fail;
 		}
 
+		if (mz->len < min_chunk_size) {
+			rte_memzone_free(mz);
+			ret = -ENOMEM;
+			goto fail;
+		}
+
 		if (no_contig)
 			iova = RTE_BAD_IOVA;
 		else
@@ -753,16 +749,18 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 }
 
 /* return the memory size required for mempool objects in anonymous mem */
-static size_t
+static ssize_t
 get_anon_size(const struct rte_mempool *mp)
 {
-	size_t size, total_elt_sz, pg_sz, pg_shift;
+	ssize_t size;
+	size_t pg_sz, pg_shift;
+	size_t min_chunk_size;
+	size_t align;
 
 	pg_sz = getpagesize();
 	pg_shift = rte_bsf32(pg_sz);
-	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
-	size = rte_mempool_xmem_size(mp->size, total_elt_sz, pg_shift,
-					mp->flags);
+	size = rte_mempool_ops_calc_mem_size(mp, mp->size, pg_shift,
+					     &min_chunk_size, &align);
 
 	return size;
 }
@@ -772,14 +770,25 @@ static void
 rte_mempool_memchunk_anon_free(struct rte_mempool_memhdr *memhdr,
 	void *opaque)
 {
-	munmap(opaque, get_anon_size(memhdr->mp));
+	ssize_t size;
+
+	/*
+	 * Calculate size since memhdr->len has contiguous chunk length
+	 * which may be smaller if anon map is split into many contiguous
+	 * chunks. Result must be the same as we calculated on populate.
+	 */
+	size = get_anon_size(memhdr->mp);
+	if (size < 0)
+		return;
+
+	munmap(opaque, size);
 }
 
 /* populate the mempool with an anonymous mapping */
 int
 rte_mempool_populate_anon(struct rte_mempool *mp)
 {
-	size_t size;
+	ssize_t size;
 	int ret;
 	char *addr;
 
@@ -793,8 +802,13 @@ rte_mempool_populate_anon(struct rte_mempool *mp)
 	if (ret != 0)
 		return ret;
 
-	/* get chunk of virtually continuous memory */
 	size = get_anon_size(mp);
+	if (size < 0) {
+		rte_errno = -size;
+		return 0;
+	}
+
+	/* get chunk of virtually continuous memory */
 	addr = mmap(NULL, size, PROT_READ | PROT_WRITE,
 		MAP_SHARED | MAP_ANONYMOUS, -1, 0);
 	if (addr == MAP_FAILED) {
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index e531a15..191255d 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -400,6 +400,62 @@ typedef int (*rte_mempool_get_capabilities_t)(const struct rte_mempool *mp,
 typedef int (*rte_mempool_ops_register_memory_area_t)
 (const struct rte_mempool *mp, char *vaddr, rte_iova_t iova, size_t len);
 
+/**
+ * Calculate memory size required to store given number of objects.
+ *
+ * If mempool objects are not required to be IOVA-contiguous
+ * (the flag MEMPOOL_F_NO_IOVA_CONTIG is set), min_chunk_size defines
+ * virtually contiguous chunk size. Otherwise, if mempool objects must
+ * be IOVA-contiguous (the flag MEMPOOL_F_NO_IOVA_CONTIG is clear),
+ * min_chunk_size defines IOVA-contiguous chunk size.
+ *
+ * @param[in] mp
+ *   Pointer to the memory pool.
+ * @param[in] obj_num
+ *   Number of objects.
+ * @param[in] pg_shift
+ *   LOG2 of the physical pages size. If set to 0, ignore page boundaries.
+ * @param[out] min_chunk_size
+ *   Location for minimum size of the memory chunk which may be used to
+ *   store memory pool objects.
+ * @param[out] align
+ *   Location for required memory chunk alignment.
+ * @return
+ *   Required memory size aligned at page boundary.
+ */
+typedef ssize_t (*rte_mempool_calc_mem_size_t)(const struct rte_mempool *mp,
+		uint32_t obj_num,  uint32_t pg_shift,
+		size_t *min_chunk_size, size_t *align);
+
+/**
+ * Default way to calculate memory size required to store given number of
+ * objects.
+ *
+ * If page boundaries may be ignored, it is just a product of total
+ * object size including header and trailer and number of objects.
+ * Otherwise, it is a number of pages required to store given number of
+ * objects without crossing page boundary.
+ *
+ * Note that if object size is bigger than page size, then it assumes
+ * that pages are grouped in subsets of physically continuous pages big
+ * enough to store at least one object.
+ *
+ * If mempool driver requires object addresses to be block size aligned
+ * (MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS), space for one extra element is
+ * reserved to be able to meet the requirement.
+ *
+ * Minimum size of memory chunk is either all required space, if
+ * capabilities say that whole memory area must be physically contiguous
+ * (MEMPOOL_F_CAPA_PHYS_CONTIG), or a maximum of the page size and total
+ * element size.
+ *
+ * Required memory chunk alignment is a maximum of page size and cache
+ * line size.
+ */
+ssize_t rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
+		uint32_t obj_num, uint32_t pg_shift,
+		size_t *min_chunk_size, size_t *align);
+
 /** Structure defining mempool operations structure */
 struct rte_mempool_ops {
 	char name[RTE_MEMPOOL_OPS_NAMESIZE]; /**< Name of mempool ops struct. */
@@ -416,6 +472,11 @@ struct rte_mempool_ops {
 	 * Notify new memory area to mempool
 	 */
 	rte_mempool_ops_register_memory_area_t register_memory_area;
+	/**
+	 * Optional callback to calculate memory size required to
+	 * store specified number of objects.
+	 */
+	rte_mempool_calc_mem_size_t calc_mem_size;
 } __rte_cache_aligned;
 
 #define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */
@@ -565,6 +626,29 @@ rte_mempool_ops_register_memory_area(const struct rte_mempool *mp,
 				char *vaddr, rte_iova_t iova, size_t len);
 
 /**
+ * @internal wrapper for mempool_ops calc_mem_size callback.
+ * API to calculate size of memory required to store specified number of
+ * object.
+ *
+ * @param[in] mp
+ *   Pointer to the memory pool.
+ * @param[in] obj_num
+ *   Number of objects.
+ * @param[in] pg_shift
+ *   LOG2 of the physical pages size. If set to 0, ignore page boundaries.
+ * @param[out] min_chunk_size
+ *   Location for minimum size of the memory chunk which may be used to
+ *   store memory pool objects.
+ * @param[out] align
+ *   Location for required memory chunk alignment.
+ * @return
+ *   Required memory size aligned at page boundary.
+ */
+ssize_t rte_mempool_ops_calc_mem_size(const struct rte_mempool *mp,
+				      uint32_t obj_num, uint32_t pg_shift,
+				      size_t *min_chunk_size, size_t *align);
+
+/**
  * @internal wrapper for mempool_ops free callback.
  *
  * @param mp
@@ -1534,7 +1618,7 @@ uint32_t rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
  * of objects. Assume that the memory buffer will be aligned at page
  * boundary.
  *
- * Note that if object size is bigger then page size, then it assumes
+ * Note that if object size is bigger than page size, then it assumes
  * that pages are grouped in subsets of physically continuous pages big
  * enough to store at least one object.
  *
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
index 0732255..26908cc 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -59,6 +59,7 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h)
 	ops->get_count = h->get_count;
 	ops->get_capabilities = h->get_capabilities;
 	ops->register_memory_area = h->register_memory_area;
+	ops->calc_mem_size = h->calc_mem_size;
 
 	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
 
@@ -123,6 +124,23 @@ rte_mempool_ops_register_memory_area(const struct rte_mempool *mp, char *vaddr,
 	return ops->register_memory_area(mp, vaddr, iova, len);
 }
 
+/* wrapper to notify new memory area to external mempool */
+ssize_t
+rte_mempool_ops_calc_mem_size(const struct rte_mempool *mp,
+				uint32_t obj_num, uint32_t pg_shift,
+				size_t *min_chunk_size, size_t *align)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_get_ops(mp->ops_index);
+
+	if (ops->calc_mem_size == NULL)
+		return rte_mempool_op_calc_mem_size_default(mp, obj_num,
+				pg_shift, min_chunk_size, align);
+
+	return ops->calc_mem_size(mp, obj_num, pg_shift, min_chunk_size, align);
+}
+
 /* sets mempool ops previously registered by rte_mempool_register_ops. */
 int
 rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
diff --git a/lib/librte_mempool/rte_mempool_ops_default.c b/lib/librte_mempool/rte_mempool_ops_default.c
new file mode 100644
index 0000000..57fe79b
--- /dev/null
+++ b/lib/librte_mempool/rte_mempool_ops_default.c
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2016 Intel Corporation.
+ * Copyright(c) 2016 6WIND S.A.
+ * Copyright(c) 2018 Solarflare Communications Inc.
+ */
+
+#include <rte_mempool.h>
+
+ssize_t
+rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
+				     uint32_t obj_num, uint32_t pg_shift,
+				     size_t *min_chunk_size, size_t *align)
+{
+	unsigned int mp_flags;
+	int ret;
+	size_t total_elt_sz;
+	size_t mem_size;
+
+	/* Get mempool capabilities */
+	mp_flags = 0;
+	ret = rte_mempool_ops_get_capabilities(mp, &mp_flags);
+	if ((ret < 0) && (ret != -ENOTSUP))
+		return ret;
+
+	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
+
+	mem_size = rte_mempool_xmem_size(obj_num, total_elt_sz, pg_shift,
+					 mp->flags | mp_flags);
+
+	if (mp_flags & MEMPOOL_F_CAPA_PHYS_CONTIG)
+		*min_chunk_size = mem_size;
+	else
+		*min_chunk_size = RTE_MAX((size_t)1 << pg_shift, total_elt_sz);
+
+	*align = RTE_MAX((size_t)RTE_CACHE_LINE_SIZE, (size_t)1 << pg_shift);
+
+	return mem_size;
+}
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index 62b76f9..cb38189 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -51,3 +51,10 @@ DPDK_17.11 {
 	rte_mempool_populate_iova_tab;
 
 } DPDK_16.07;
+
+DPDK_18.05 {
+	global:
+
+	rte_mempool_op_calc_mem_size_default;
+
+} DPDK_17.11;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [PATCH v4 05/11] mempool: add op to populate objects using provided memory
  2018-04-16 13:24 ` [PATCH v4 00/11] mempool: prepare to " Andrew Rybchenko
                     ` (3 preceding siblings ...)
  2018-04-16 13:24   ` [PATCH v4 04/11] mempool: add op to calculate memory size to be allocated Andrew Rybchenko
@ 2018-04-16 13:24   ` Andrew Rybchenko
  2018-04-16 13:24   ` [PATCH v4 06/11] mempool: remove callback to get capabilities Andrew Rybchenko
                     ` (6 subsequent siblings)
  11 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-04-16 13:24 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ

The callback allows customizing how objects are stored in the
memory chunk. A default implementation of the callback, which simply
puts objects one by one, is available.
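
As an illustration (a hypothetical driver callback, not part of this patch),
a custom populate may adjust where the first object starts and then delegate
the per-object placement and enqueue to the default implementation:

  #include <rte_mempool.h>

  static int
  sketch_populate(struct rte_mempool *mp, unsigned int max_objs,
                  void *vaddr, rte_iova_t iova, size_t len,
                  rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
  {
          size_t total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
          size_t off;

          /* This sketch assumes a known IO address. */
          if (iova == RTE_BAD_IOVA)
                  return -EINVAL;

          /* Example constraint: start objects at a multiple of total_elt_sz. */
          off = total_elt_sz - ((uintptr_t)vaddr % total_elt_sz);
          if (len < off)
                  return -EINVAL;

          return rte_mempool_op_populate_default(mp, max_objs,
                                                 (char *)vaddr + off, iova + off,
                                                 len - off, obj_cb, obj_cb_arg);
  }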

Suggested-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Santosh Shukla <Santosh.Shukla@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
v3 -> v4:
 - none

v2 -> v3:
 - none

v1 -> v2:
 - fix memory leak if off is bigger than len

RFCv2 -> v1:
 - advertise ABI changes in release notes
 - use consistent name for default callback:
   rte_mempool_op_<callback>_default()
 - add opaque data pointer to populated object callback
 - move default callback to dedicated file

 doc/guides/rel_notes/deprecation.rst         |  2 +-
 doc/guides/rel_notes/release_18_05.rst       |  2 +
 lib/librte_mempool/rte_mempool.c             | 23 ++++---
 lib/librte_mempool/rte_mempool.h             | 90 ++++++++++++++++++++++++++++
 lib/librte_mempool/rte_mempool_ops.c         | 21 +++++++
 lib/librte_mempool/rte_mempool_ops_default.c | 24 ++++++++
 lib/librte_mempool/rte_mempool_version.map   |  1 +
 7 files changed, 149 insertions(+), 14 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 2aa5ef3..575da18 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -60,7 +60,7 @@ Deprecation Notices
 
   - removal of ``get_capabilities`` mempool ops and related flags.
   - substitute ``register_memory_area`` with ``populate`` ops.
-  - addition of new ops to customize objects population and allocate contiguous
+  - addition of new op to allocate contiguous
     block of objects if underlying driver supports it.
 
 * mbuf: The opaque ``mbuf->hash.sched`` field will be updated to support generic
diff --git a/doc/guides/rel_notes/release_18_05.rst b/doc/guides/rel_notes/release_18_05.rst
index 7dbe7ac..5c6588e 100644
--- a/doc/guides/rel_notes/release_18_05.rst
+++ b/doc/guides/rel_notes/release_18_05.rst
@@ -199,6 +199,8 @@ ABI Changes
 
   A new callback ``calc_mem_size`` has been added to ``rte_mempool_ops``
   to allow to customize required memory size calculation.
+  A new callback ``populate`` has been added to ``rte_mempool_ops``
+  to allow to customize objects population.
 
 
 Removed Items
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index fdcee05..68ae12f 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -122,7 +122,8 @@ get_min_page_size(void)
 
 
 static void
-mempool_add_elem(struct rte_mempool *mp, void *obj, rte_iova_t iova)
+mempool_add_elem(struct rte_mempool *mp, __rte_unused void *opaque,
+		 void *obj, rte_iova_t iova)
 {
 	struct rte_mempool_objhdr *hdr;
 	struct rte_mempool_objtlr *tlr __rte_unused;
@@ -139,9 +140,6 @@ mempool_add_elem(struct rte_mempool *mp, void *obj, rte_iova_t iova)
 	tlr = __mempool_get_trailer(obj);
 	tlr->cookie = RTE_MEMPOOL_TRAILER_COOKIE;
 #endif
-
-	/* enqueue in ring */
-	rte_mempool_ops_enqueue_bulk(mp, &obj, 1);
 }
 
 /* call obj_cb() for each mempool element */
@@ -420,17 +418,16 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	else
 		off = RTE_PTR_ALIGN_CEIL(vaddr, RTE_CACHE_LINE_SIZE) - vaddr;
 
-	while (off + total_elt_sz <= len && mp->populated_size < mp->size) {
-		off += mp->header_size;
-		if (iova == RTE_BAD_IOVA)
-			mempool_add_elem(mp, (char *)vaddr + off,
-				RTE_BAD_IOVA);
-		else
-			mempool_add_elem(mp, (char *)vaddr + off, iova + off);
-		off += mp->elt_size + mp->trailer_size;
-		i++;
+	if (off > len) {
+		ret = -EINVAL;
+		goto fail;
 	}
 
+	i = rte_mempool_ops_populate(mp, mp->size - mp->populated_size,
+		(char *)vaddr + off,
+		(iova == RTE_BAD_IOVA) ? RTE_BAD_IOVA : (iova + off),
+		len - off, mempool_add_elem, NULL);
+
 	/* not enough room to store one object */
 	if (i == 0) {
 		ret = -EINVAL;
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 191255d..754261e 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -456,6 +456,63 @@ ssize_t rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
 		uint32_t obj_num, uint32_t pg_shift,
 		size_t *min_chunk_size, size_t *align);
 
+/**
+ * Function to be called for each populated object.
+ *
+ * @param[in] mp
+ *   A pointer to the mempool structure.
+ * @param[in] opaque
+ *   An opaque pointer passed to iterator.
+ * @param[in] vaddr
+ *   Object virtual address.
+ * @param[in] iova
+ *   Input/output virtual address of the object or RTE_BAD_IOVA.
+ */
+typedef void (rte_mempool_populate_obj_cb_t)(struct rte_mempool *mp,
+		void *opaque, void *vaddr, rte_iova_t iova);
+
+/**
+ * Populate memory pool objects using provided memory chunk.
+ *
+ * Populated objects should be enqueued to the pool, e.g. using
+ * rte_mempool_ops_enqueue_bulk().
+ *
+ * If the given IO address is unknown (iova = RTE_BAD_IOVA),
+ * the chunk doesn't need to be physically contiguous (only virtually),
+ * and allocated objects may span two pages.
+ *
+ * @param[in] mp
+ *   A pointer to the mempool structure.
+ * @param[in] max_objs
+ *   Maximum number of objects to be populated.
+ * @param[in] vaddr
+ *   The virtual address of memory that should be used to store objects.
+ * @param[in] iova
+ *   The IO address
+ * @param[in] len
+ *   The length of memory in bytes.
+ * @param[in] obj_cb
+ *   Callback function to be executed for each populated object.
+ * @param[in] obj_cb_arg
+ *   An opaque pointer passed to the callback function.
+ * @return
+ *   The number of objects added on success.
+ *   On error, no objects are populated and a negative errno is returned.
+ */
+typedef int (*rte_mempool_populate_t)(struct rte_mempool *mp,
+		unsigned int max_objs,
+		void *vaddr, rte_iova_t iova, size_t len,
+		rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg);
+
+/**
+ * Default way to populate memory pool object using provided memory
+ * chunk: just slice objects one by one.
+ */
+int rte_mempool_op_populate_default(struct rte_mempool *mp,
+		unsigned int max_objs,
+		void *vaddr, rte_iova_t iova, size_t len,
+		rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg);
+
 /** Structure defining mempool operations structure */
 struct rte_mempool_ops {
 	char name[RTE_MEMPOOL_OPS_NAMESIZE]; /**< Name of mempool ops struct. */
@@ -477,6 +534,11 @@ struct rte_mempool_ops {
 	 * store specified number of objects.
 	 */
 	rte_mempool_calc_mem_size_t calc_mem_size;
+	/**
+	 * Optional callback to populate mempool objects using
+	 * provided memory chunk.
+	 */
+	rte_mempool_populate_t populate;
 } __rte_cache_aligned;
 
 #define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */
@@ -649,6 +711,34 @@ ssize_t rte_mempool_ops_calc_mem_size(const struct rte_mempool *mp,
 				      size_t *min_chunk_size, size_t *align);
 
 /**
+ * @internal wrapper for mempool_ops populate callback.
+ *
+ * Populate memory pool objects using provided memory chunk.
+ *
+ * @param[in] mp
+ *   A pointer to the mempool structure.
+ * @param[in] max_objs
+ *   Maximum number of objects to be populated.
+ * @param[in] vaddr
+ *   The virtual address of memory that should be used to store objects.
+ * @param[in] iova
+ *   The IO address
+ * @param[in] len
+ *   The length of memory in bytes.
+ * @param[in] obj_cb
+ *   Callback function to be executed for each populated object.
+ * @param[in] obj_cb_arg
+ *   An opaque pointer passed to the callback function.
+ * @return
+ *   The number of objects added on success.
+ *   On error, no objects are populated and a negative errno is returned.
+ */
+int rte_mempool_ops_populate(struct rte_mempool *mp, unsigned int max_objs,
+			     void *vaddr, rte_iova_t iova, size_t len,
+			     rte_mempool_populate_obj_cb_t *obj_cb,
+			     void *obj_cb_arg);
+
+/**
  * @internal wrapper for mempool_ops free callback.
  *
  * @param mp
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
index 26908cc..1a7f39f 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -60,6 +60,7 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h)
 	ops->get_capabilities = h->get_capabilities;
 	ops->register_memory_area = h->register_memory_area;
 	ops->calc_mem_size = h->calc_mem_size;
+	ops->populate = h->populate;
 
 	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
 
@@ -141,6 +142,26 @@ rte_mempool_ops_calc_mem_size(const struct rte_mempool *mp,
 	return ops->calc_mem_size(mp, obj_num, pg_shift, min_chunk_size, align);
 }
 
+/* wrapper to populate memory pool objects using provided memory chunk */
+int
+rte_mempool_ops_populate(struct rte_mempool *mp, unsigned int max_objs,
+				void *vaddr, rte_iova_t iova, size_t len,
+				rte_mempool_populate_obj_cb_t *obj_cb,
+				void *obj_cb_arg)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_get_ops(mp->ops_index);
+
+	if (ops->populate == NULL)
+		return rte_mempool_op_populate_default(mp, max_objs, vaddr,
+						       iova, len, obj_cb,
+						       obj_cb_arg);
+
+	return ops->populate(mp, max_objs, vaddr, iova, len, obj_cb,
+			     obj_cb_arg);
+}
+
 /* sets mempool ops previously registered by rte_mempool_register_ops. */
 int
 rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
diff --git a/lib/librte_mempool/rte_mempool_ops_default.c b/lib/librte_mempool/rte_mempool_ops_default.c
index 57fe79b..57295f7 100644
--- a/lib/librte_mempool/rte_mempool_ops_default.c
+++ b/lib/librte_mempool/rte_mempool_ops_default.c
@@ -36,3 +36,27 @@ rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
 
 	return mem_size;
 }
+
+int
+rte_mempool_op_populate_default(struct rte_mempool *mp, unsigned int max_objs,
+		void *vaddr, rte_iova_t iova, size_t len,
+		rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
+{
+	size_t total_elt_sz;
+	size_t off;
+	unsigned int i;
+	void *obj;
+
+	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
+
+	for (off = 0, i = 0; off + total_elt_sz <= len && i < max_objs; i++) {
+		off += mp->header_size;
+		obj = (char *)vaddr + off;
+		obj_cb(mp, obj_cb_arg, obj,
+		       (iova == RTE_BAD_IOVA) ? RTE_BAD_IOVA : (iova + off));
+		rte_mempool_ops_enqueue_bulk(mp, &obj, 1);
+		off += mp->elt_size + mp->trailer_size;
+	}
+
+	return i;
+}
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index cb38189..41a0b09 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -56,5 +56,6 @@ DPDK_18.05 {
 	global:
 
 	rte_mempool_op_calc_mem_size_default;
+	rte_mempool_op_populate_default;
 
 } DPDK_17.11;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [PATCH v4 06/11] mempool: remove callback to get capabilities
  2018-04-16 13:24 ` [PATCH v4 00/11] mempool: prepare to " Andrew Rybchenko
                     ` (4 preceding siblings ...)
  2018-04-16 13:24   ` [PATCH v4 05/11] mempool: add op to populate objects using provided memory Andrew Rybchenko
@ 2018-04-16 13:24   ` Andrew Rybchenko
  2018-04-16 13:24   ` [PATCH v4 07/11] mempool: deprecate xmem functions Andrew Rybchenko
                     ` (5 subsequent siblings)
  11 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-04-16 13:24 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ, Santosh Shukla, Jerin Jacob

The callback was introduced to let the generic code know the octeontx
mempool driver requirements: use a single physically contiguous
memory chunk to store all objects and align object addresses to
the total object size. Now these requirements are met using new
callbacks to calculate the required memory chunk size and to populate
objects using the provided memory chunk.

These capability flags are not used anywhere else.

Restricting capabilities to flags is not generic and likely to
be insufficient to describe mempool driver features. If required
in the future, an API which returns structured information may be
added.
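
For reference, the mapping from the removed flags to the new callbacks, as
implemented for octeontx in the hunks below, is:

  MEMPOOL_F_CAPA_PHYS_CONTIG         -> calc_mem_size reports the whole
                                        mem_size as *min_chunk_size
  MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS -> populate skips ahead to the next
                                        multiple of total_elt_sz before
                                        placing objects (calc_mem_size
                                        reserves one extra element for this)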

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Santosh Shukla <Santosh.Shukla@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
v3 -> v4:
 - rebase

v2 -> v3:
 - none

v1 -> v2:
 - fix typo
 - rebase on top of patch which renames MEMPOOL_F_NO_PHYS_CONTIG

RFCv2 -> v1:
 - squash mempool/octeontx patches to add calc_mem_size and populate
   callbacks to this one in order to avoid breakages in the middle of
   patchset
 - advertise API changes in release notes

 doc/guides/rel_notes/deprecation.rst            |  1 -
 doc/guides/rel_notes/release_18_05.rst          | 11 +++++
 drivers/mempool/octeontx/rte_mempool_octeontx.c | 59 +++++++++++++++++++++----
 lib/librte_mempool/rte_mempool.c                | 34 ++------------
 lib/librte_mempool/rte_mempool.h                | 52 +---------------------
 lib/librte_mempool/rte_mempool_ops.c            | 14 ------
 lib/librte_mempool/rte_mempool_ops_default.c    | 15 +------
 lib/librte_mempool/rte_mempool_version.map      |  1 -
 8 files changed, 68 insertions(+), 119 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 575da18..99a0b01 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -58,7 +58,6 @@ Deprecation Notices
 
   The following changes are planned:
 
-  - removal of ``get_capabilities`` mempool ops and related flags.
   - substitute ``register_memory_area`` with ``populate`` ops.
   - addition of new op to allocate contiguous
     block of objects if underlying driver supports it.
diff --git a/doc/guides/rel_notes/release_18_05.rst b/doc/guides/rel_notes/release_18_05.rst
index 5c6588e..f481eea 100644
--- a/doc/guides/rel_notes/release_18_05.rst
+++ b/doc/guides/rel_notes/release_18_05.rst
@@ -173,6 +173,14 @@ API Changes
    fall-back value. Previously, setting ``nb_tx_desc`` to zero would have
    resulted in an error.
 
+* **Removed mempool capability flags and related functions.**
+
+  Flags ``MEMPOOL_F_CAPA_PHYS_CONTIG`` and
+  ``MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS`` were used by octeontx mempool
+  driver to customize generic mempool library behaviour.
+  Now the new driver callbacks ``calc_mem_size`` and ``populate`` may be
+  used to achieve it without specific knowledge in the generic code.
+
 
 ABI Changes
 -----------
@@ -201,6 +209,9 @@ ABI Changes
   to allow to customize required memory size calculation.
   A new callback ``populate`` has been added to ``rte_mempool_ops``
   to allow to customize objects population.
+  Callback ``get_capabilities`` has been removed from ``rte_mempool_ops``
+  since its features are covered by ``calc_mem_size`` and ``populate``
+  callbacks.
 
 
 Removed Items
diff --git a/drivers/mempool/octeontx/rte_mempool_octeontx.c b/drivers/mempool/octeontx/rte_mempool_octeontx.c
index d143d05..64ed528 100644
--- a/drivers/mempool/octeontx/rte_mempool_octeontx.c
+++ b/drivers/mempool/octeontx/rte_mempool_octeontx.c
@@ -126,14 +126,29 @@ octeontx_fpavf_get_count(const struct rte_mempool *mp)
 	return octeontx_fpa_bufpool_free_count(pool);
 }
 
-static int
-octeontx_fpavf_get_capabilities(const struct rte_mempool *mp,
-				unsigned int *flags)
+static ssize_t
+octeontx_fpavf_calc_mem_size(const struct rte_mempool *mp,
+			     uint32_t obj_num, uint32_t pg_shift,
+			     size_t *min_chunk_size, size_t *align)
 {
-	RTE_SET_USED(mp);
-	*flags |= (MEMPOOL_F_CAPA_PHYS_CONTIG |
-			MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS);
-	return 0;
+	ssize_t mem_size;
+
+	/*
+	 * Simply need space for one more object to be able to
+	 * fulfil alignment requirements.
+	 */
+	mem_size = rte_mempool_op_calc_mem_size_default(mp, obj_num + 1,
+							pg_shift,
+							min_chunk_size, align);
+	if (mem_size >= 0) {
+		/*
+		 * Memory area which contains objects must be physically
+		 * contiguous.
+		 */
+		*min_chunk_size = mem_size;
+	}
+
+	return mem_size;
 }
 
 static int
@@ -150,6 +165,33 @@ octeontx_fpavf_register_memory_area(const struct rte_mempool *mp,
 	return octeontx_fpavf_pool_set_range(pool_bar, len, vaddr, gpool);
 }
 
+static int
+octeontx_fpavf_populate(struct rte_mempool *mp, unsigned int max_objs,
+			void *vaddr, rte_iova_t iova, size_t len,
+			rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
+{
+	size_t total_elt_sz;
+	size_t off;
+
+	if (iova == RTE_BAD_IOVA)
+		return -EINVAL;
+
+	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
+
+	/* align object start address to a multiple of total_elt_sz */
+	off = total_elt_sz - ((uintptr_t)vaddr % total_elt_sz);
+
+	if (len < off)
+		return -EINVAL;
+
+	vaddr = (char *)vaddr + off;
+	iova += off;
+	len -= off;
+
+	return rte_mempool_op_populate_default(mp, max_objs, vaddr, iova, len,
+					       obj_cb, obj_cb_arg);
+}
+
 static struct rte_mempool_ops octeontx_fpavf_ops = {
 	.name = "octeontx_fpavf",
 	.alloc = octeontx_fpavf_alloc,
@@ -157,8 +199,9 @@ static struct rte_mempool_ops octeontx_fpavf_ops = {
 	.enqueue = octeontx_fpavf_enqueue,
 	.dequeue = octeontx_fpavf_dequeue,
 	.get_count = octeontx_fpavf_get_count,
-	.get_capabilities = octeontx_fpavf_get_capabilities,
 	.register_memory_area = octeontx_fpavf_register_memory_area,
+	.calc_mem_size = octeontx_fpavf_calc_mem_size,
+	.populate = octeontx_fpavf_populate,
 };
 
 MEMPOOL_REGISTER_OPS(octeontx_fpavf_ops);
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 68ae12f..5c75c16 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -231,15 +231,9 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
  */
 size_t
 rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
-		      unsigned int flags)
+		      __rte_unused unsigned int flags)
 {
 	size_t obj_per_page, pg_num, pg_sz;
-	unsigned int mask;
-
-	mask = MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS | MEMPOOL_F_CAPA_PHYS_CONTIG;
-	if ((flags & mask) == mask)
-		/* alignment need one additional object */
-		elt_num += 1;
 
 	if (total_elt_sz == 0)
 		return 0;
@@ -263,18 +257,12 @@ rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
 ssize_t
 rte_mempool_xmem_usage(__rte_unused void *vaddr, uint32_t elt_num,
 	size_t total_elt_sz, const rte_iova_t iova[], uint32_t pg_num,
-	uint32_t pg_shift, unsigned int flags)
+	uint32_t pg_shift, __rte_unused unsigned int flags)
 {
 	uint32_t elt_cnt = 0;
 	rte_iova_t start, end;
 	uint32_t iova_idx;
 	size_t pg_sz = (size_t)1 << pg_shift;
-	unsigned int mask;
-
-	mask = MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS | MEMPOOL_F_CAPA_PHYS_CONTIG;
-	if ((flags & mask) == mask)
-		/* alignment need one additional object */
-		elt_num += 1;
 
 	/* if iova is NULL, assume contiguous memory */
 	if (iova == NULL) {
@@ -368,8 +356,6 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
 	void *opaque)
 {
-	unsigned total_elt_sz;
-	unsigned int mp_capa_flags;
 	unsigned i = 0;
 	size_t off;
 	struct rte_mempool_memhdr *memhdr;
@@ -388,17 +374,6 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	if (mp->populated_size >= mp->size)
 		return -ENOSPC;
 
-	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
-
-	/* Get mempool capabilities */
-	mp_capa_flags = 0;
-	ret = rte_mempool_ops_get_capabilities(mp, &mp_capa_flags);
-	if ((ret < 0) && (ret != -ENOTSUP))
-		return ret;
-
-	/* update mempool capabilities */
-	mp->flags |= mp_capa_flags;
-
 	memhdr = rte_zmalloc("MEMPOOL_MEMHDR", sizeof(*memhdr), 0);
 	if (memhdr == NULL)
 		return -ENOMEM;
@@ -410,10 +385,7 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	memhdr->free_cb = free_cb;
 	memhdr->opaque = opaque;
 
-	if (mp_capa_flags & MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS)
-		/* align object start address to a multiple of total_elt_sz */
-		off = total_elt_sz - ((uintptr_t)vaddr % total_elt_sz);
-	else if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
+	if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
 		off = RTE_PTR_ALIGN_CEIL(vaddr, 8) - vaddr;
 	else
 		off = RTE_PTR_ALIGN_CEIL(vaddr, RTE_CACHE_LINE_SIZE) - vaddr;
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 754261e..0b83d5e 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -246,24 +246,6 @@ struct rte_mempool {
 #define MEMPOOL_F_POOL_CREATED   0x0010 /**< Internal: pool is created. */
 #define MEMPOOL_F_NO_IOVA_CONTIG 0x0020 /**< Don't need IOVA contiguous objs. */
 #define MEMPOOL_F_NO_PHYS_CONTIG MEMPOOL_F_NO_IOVA_CONTIG /* deprecated */
-/**
- * This capability flag is advertised by a mempool handler, if the whole
- * memory area containing the objects must be physically contiguous.
- * Note: This flag should not be passed by application.
- */
-#define MEMPOOL_F_CAPA_PHYS_CONTIG 0x0040
-/**
- * This capability flag is advertised by a mempool handler. Used for a case
- * where mempool driver wants object start address(vaddr) aligned to block
- * size(/ total element size).
- *
- * Note:
- * - This flag should not be passed by application.
- *   Flag used for mempool driver only.
- * - Mempool driver must also set MEMPOOL_F_CAPA_PHYS_CONTIG flag along with
- *   MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS.
- */
-#define MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS 0x0080
 
 /**
  * @internal When debug is enabled, store some statistics.
@@ -389,12 +371,6 @@ typedef int (*rte_mempool_dequeue_t)(struct rte_mempool *mp,
 typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
 
 /**
- * Get the mempool capabilities.
- */
-typedef int (*rte_mempool_get_capabilities_t)(const struct rte_mempool *mp,
-		unsigned int *flags);
-
-/**
  * Notify new memory area to mempool.
  */
 typedef int (*rte_mempool_ops_register_memory_area_t)
@@ -440,13 +416,7 @@ typedef ssize_t (*rte_mempool_calc_mem_size_t)(const struct rte_mempool *mp,
  * that pages are grouped in subsets of physically continuous pages big
  * enough to store at least one object.
  *
- * If mempool driver requires object addresses to be block size aligned
- * (MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS), space for one extra element is
- * reserved to be able to meet the requirement.
- *
- * Minimum size of memory chunk is either all required space, if
- * capabilities say that whole memory area must be physically contiguous
- * (MEMPOOL_F_CAPA_PHYS_CONTIG), or a maximum of the page size and total
+ * Minimum size of memory chunk is a maximum of the page size and total
  * element size.
  *
  * Required memory chunk alignment is a maximum of page size and cache
@@ -522,10 +492,6 @@ struct rte_mempool_ops {
 	rte_mempool_dequeue_t dequeue;   /**< Dequeue an object. */
 	rte_mempool_get_count get_count; /**< Get qty of available objs. */
 	/**
-	 * Get the mempool capabilities
-	 */
-	rte_mempool_get_capabilities_t get_capabilities;
-	/**
 	 * Notify new memory area to mempool
 	 */
 	rte_mempool_ops_register_memory_area_t register_memory_area;
@@ -651,22 +617,6 @@ unsigned
 rte_mempool_ops_get_count(const struct rte_mempool *mp);
 
 /**
- * @internal wrapper for mempool_ops get_capabilities callback.
- *
- * @param mp [in]
- *   Pointer to the memory pool.
- * @param flags [out]
- *   Pointer to the mempool flags.
- * @return
- *   - 0: Success; The mempool driver has advertised his pool capabilities in
- *   flags param.
- *   - -ENOTSUP - doesn't support get_capabilities ops (valid case).
- *   - Otherwise, pool create fails.
- */
-int
-rte_mempool_ops_get_capabilities(const struct rte_mempool *mp,
-					unsigned int *flags);
-/**
  * @internal wrapper for mempool_ops register_memory_area callback.
  * API to notify the mempool handler when a new memory area is added to pool.
  *
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
index 1a7f39f..6ac669a 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -57,7 +57,6 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h)
 	ops->enqueue = h->enqueue;
 	ops->dequeue = h->dequeue;
 	ops->get_count = h->get_count;
-	ops->get_capabilities = h->get_capabilities;
 	ops->register_memory_area = h->register_memory_area;
 	ops->calc_mem_size = h->calc_mem_size;
 	ops->populate = h->populate;
@@ -99,19 +98,6 @@ rte_mempool_ops_get_count(const struct rte_mempool *mp)
 	return ops->get_count(mp);
 }
 
-/* wrapper to get external mempool capabilities. */
-int
-rte_mempool_ops_get_capabilities(const struct rte_mempool *mp,
-					unsigned int *flags)
-{
-	struct rte_mempool_ops *ops;
-
-	ops = rte_mempool_get_ops(mp->ops_index);
-
-	RTE_FUNC_PTR_OR_ERR_RET(ops->get_capabilities, -ENOTSUP);
-	return ops->get_capabilities(mp, flags);
-}
-
 /* wrapper to notify new memory area to external mempool */
 int
 rte_mempool_ops_register_memory_area(const struct rte_mempool *mp, char *vaddr,
diff --git a/lib/librte_mempool/rte_mempool_ops_default.c b/lib/librte_mempool/rte_mempool_ops_default.c
index 57295f7..3defc15 100644
--- a/lib/librte_mempool/rte_mempool_ops_default.c
+++ b/lib/librte_mempool/rte_mempool_ops_default.c
@@ -11,26 +11,15 @@ rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
 				     uint32_t obj_num, uint32_t pg_shift,
 				     size_t *min_chunk_size, size_t *align)
 {
-	unsigned int mp_flags;
-	int ret;
 	size_t total_elt_sz;
 	size_t mem_size;
 
-	/* Get mempool capabilities */
-	mp_flags = 0;
-	ret = rte_mempool_ops_get_capabilities(mp, &mp_flags);
-	if ((ret < 0) && (ret != -ENOTSUP))
-		return ret;
-
 	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
 
 	mem_size = rte_mempool_xmem_size(obj_num, total_elt_sz, pg_shift,
-					 mp->flags | mp_flags);
+					 mp->flags);
 
-	if (mp_flags & MEMPOOL_F_CAPA_PHYS_CONTIG)
-		*min_chunk_size = mem_size;
-	else
-		*min_chunk_size = RTE_MAX((size_t)1 << pg_shift, total_elt_sz);
+	*min_chunk_size = RTE_MAX((size_t)1 << pg_shift, total_elt_sz);
 
 	*align = RTE_MAX((size_t)RTE_CACHE_LINE_SIZE, (size_t)1 << pg_shift);
 
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index 41a0b09..637f73f 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -45,7 +45,6 @@ DPDK_16.07 {
 DPDK_17.11 {
 	global:
 
-	rte_mempool_ops_get_capabilities;
 	rte_mempool_ops_register_memory_area;
 	rte_mempool_populate_iova;
 	rte_mempool_populate_iova_tab;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [PATCH v4 07/11] mempool: deprecate xmem functions
  2018-04-16 13:24 ` [PATCH v4 00/11] mempool: prepare to " Andrew Rybchenko
                     ` (5 preceding siblings ...)
  2018-04-16 13:24   ` [PATCH v4 06/11] mempool: remove callback to get capabilities Andrew Rybchenko
@ 2018-04-16 13:24   ` Andrew Rybchenko
  2018-04-16 13:24   ` [PATCH v4 08/11] mempool/octeontx: prepare to remove register memory area op Andrew Rybchenko
                     ` (4 subsequent siblings)
  11 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-04-16 13:24 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ, Thomas Monjalon

Move the rte_mempool_xmem_size() code to an internal helper function
since it is required in two places: the deprecated
rte_mempool_xmem_size() and the non-deprecated
rte_mempool_op_calc_mem_size_default().

Suggested-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
v2 -> v3:
 - none

v1 -> v2:
 - deprecate rte_mempool_populate_iova_tab()
 - add -Wno-deprecated-declarations to fix build errors because of
   rte_mempool_populate_iova_tab() deprecation
 - add @deprecated to deprecated functions description

RFCv2 -> v1:
 - advertise deprecation in release notes
 - factor out default memory size calculation into non-deprecated
   internal function to avoid usage of deprecated function internally
 - remove test for deprecated functions to address build issue because
   of usage of deprecated functions (it is easy to allow usage of
   deprecated function in Makefile, but very complicated in meson)

 doc/guides/rel_notes/deprecation.rst         |  7 -------
 doc/guides/rel_notes/release_18_05.rst       | 11 ++++++++++
 lib/librte_mempool/Makefile                  |  3 +++
 lib/librte_mempool/meson.build               | 12 +++++++++++
 lib/librte_mempool/rte_mempool.c             | 19 ++++++++++++++---
 lib/librte_mempool/rte_mempool.h             | 30 +++++++++++++++++++++++++++
 lib/librte_mempool/rte_mempool_ops_default.c |  4 ++--
 test/test/test_mempool.c                     | 31 ----------------------------
 8 files changed, 74 insertions(+), 43 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 99a0b01..8d1b362 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -48,13 +48,6 @@ Deprecation Notices
   - ``rte_eal_mbuf_default_mempool_ops``
 
 * mempool: several API and ABI changes are planned in v18.05.
-  The following functions, introduced for Xen, which is not supported
-  anymore since v17.11, are hard to use, not used anywhere else in DPDK.
-  Therefore they will be deprecated in v18.05 and removed in v18.08:
-
-  - ``rte_mempool_xmem_create``
-  - ``rte_mempool_xmem_size``
-  - ``rte_mempool_xmem_usage``
 
   The following changes are planned:
 
diff --git a/doc/guides/rel_notes/release_18_05.rst b/doc/guides/rel_notes/release_18_05.rst
index f481eea..3869d04 100644
--- a/doc/guides/rel_notes/release_18_05.rst
+++ b/doc/guides/rel_notes/release_18_05.rst
@@ -181,6 +181,17 @@ API Changes
   Now the new driver callbacks ``calc_mem_size`` and ``populate`` may be
   used to achieve it without specific knowledge in the generic code.
 
+* **Deprecated mempool xmem functions.**
+
+  The following functions, introduced for Xen, which is not supported
+  anymore since v17.11, are hard to use, not used anywhere else in DPDK.
+  Therefore they were deprecated in v18.05 and will be removed in v18.08:
+
+  - ``rte_mempool_xmem_create``
+  - ``rte_mempool_xmem_size``
+  - ``rte_mempool_xmem_usage``
+  - ``rte_mempool_populate_iova_tab``
+
 
 ABI Changes
 -----------
diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
index 421e2a7..7f19f00 100644
--- a/lib/librte_mempool/Makefile
+++ b/lib/librte_mempool/Makefile
@@ -7,6 +7,9 @@ include $(RTE_SDK)/mk/rte.vars.mk
 LIB = librte_mempool.a
 
 CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
+# Allow deprecated symbol to use deprecated rte_mempool_populate_iova_tab()
+# from earlier deprecated rte_mempool_populate_phys_tab()
+CFLAGS += -Wno-deprecated-declarations
 LDLIBS += -lrte_eal -lrte_ring
 
 EXPORT_MAP := rte_mempool_version.map
diff --git a/lib/librte_mempool/meson.build b/lib/librte_mempool/meson.build
index 6181ad8..baf2d24 100644
--- a/lib/librte_mempool/meson.build
+++ b/lib/librte_mempool/meson.build
@@ -1,6 +1,18 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2017 Intel Corporation
 
+extra_flags = []
+
+# Allow deprecated symbol to use deprecated rte_mempool_populate_iova_tab()
+# from earlier deprecated rte_mempool_populate_phys_tab()
+extra_flags += '-Wno-deprecated-declarations'
+
+foreach flag: extra_flags
+	if cc.has_argument(flag)
+		cflags += flag
+	endif
+endforeach
+
 version = 4
 sources = files('rte_mempool.c', 'rte_mempool_ops.c',
 		'rte_mempool_ops_default.c')
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 5c75c16..c63c363 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -227,11 +227,13 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
 
 
 /*
- * Calculate maximum amount of memory required to store given number of objects.
+ * Internal function to calculate required memory chunk size shared
+ * by default implementation of the corresponding callback and
+ * deprecated external function.
  */
 size_t
-rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
-		      __rte_unused unsigned int flags)
+rte_mempool_calc_mem_size_helper(uint32_t elt_num, size_t total_elt_sz,
+				 uint32_t pg_shift)
 {
 	size_t obj_per_page, pg_num, pg_sz;
 
@@ -251,6 +253,17 @@ rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
 }
 
 /*
+ * Calculate maximum amount of memory required to store given number of objects.
+ */
+size_t
+rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
+		      __rte_unused unsigned int flags)
+{
+	return rte_mempool_calc_mem_size_helper(elt_num, total_elt_sz,
+						pg_shift);
+}
+
+/*
  * Calculate how much memory would be actually required with the
  * given memory footprint to store required number of elements.
  */
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 0b83d5e..9107f5a 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -427,6 +427,28 @@ ssize_t rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
 		size_t *min_chunk_size, size_t *align);
 
 /**
+ * @internal Helper function to calculate memory size required to store
+ * specified number of objects in assumption that the memory buffer will
+ * be aligned at page boundary.
+ *
+ * Note that if object size is bigger than page size, then it assumes
+ * that pages are grouped in subsets of physically contiguous pages big
+ * enough to store at least one object.
+ *
+ * @param elt_num
+ *   Number of elements.
+ * @param total_elt_sz
+ *   The size of each element, including header and trailer, as returned
+ *   by rte_mempool_calc_obj_size().
+ * @param pg_shift
+ *   LOG2 of the physical pages size. If set to 0, ignore page boundaries.
+ * @return
+ *   Required memory size aligned at page boundary.
+ */
+size_t rte_mempool_calc_mem_size_helper(uint32_t elt_num, size_t total_elt_sz,
+		uint32_t pg_shift);
+
+/**
  * Function to be called for each populated object.
  *
  * @param[in] mp
@@ -855,6 +877,7 @@ rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
 		   int socket_id, unsigned flags);
 
 /**
+ * @deprecated
  * Create a new mempool named *name* in memory.
  *
  * The pool contains n elements of elt_size. Its size is set to n.
@@ -912,6 +935,7 @@ rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
  *   The pointer to the new allocated mempool, on success. NULL on error
  *   with rte_errno set appropriately. See rte_mempool_create() for details.
  */
+__rte_deprecated
 struct rte_mempool *
 rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
 		unsigned cache_size, unsigned private_data_size,
@@ -1008,6 +1032,7 @@ int rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
 	void *opaque);
 
 /**
+ * @deprecated
  * Add physical memory for objects in the pool at init
  *
  * Add a virtually contiguous memory chunk in the pool where objects can
@@ -1033,6 +1058,7 @@ int rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
  *   On error, the chunks are not added in the memory list of the
  *   mempool and a negative errno is returned.
  */
+__rte_deprecated
 int rte_mempool_populate_iova_tab(struct rte_mempool *mp, char *vaddr,
 	const rte_iova_t iova[], uint32_t pg_num, uint32_t pg_shift,
 	rte_mempool_memchunk_free_cb_t *free_cb, void *opaque);
@@ -1652,6 +1678,7 @@ uint32_t rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
 	struct rte_mempool_objsz *sz);
 
 /**
+ * @deprecated
  * Get the size of memory required to store mempool elements.
  *
  * Calculate the maximum amount of memory required to store given number
@@ -1674,10 +1701,12 @@ uint32_t rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
  * @return
  *   Required memory size aligned at page boundary.
  */
+__rte_deprecated
 size_t rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz,
 	uint32_t pg_shift, unsigned int flags);
 
 /**
+ * @deprecated
  * Get the size of memory required to store mempool elements.
  *
  * Calculate how much memory would be actually required with the given
@@ -1705,6 +1734,7 @@ size_t rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz,
  *   buffer is too small, return a negative value whose absolute value
  *   is the actual number of elements that can be stored in that buffer.
  */
+__rte_deprecated
 ssize_t rte_mempool_xmem_usage(void *vaddr, uint32_t elt_num,
 	size_t total_elt_sz, const rte_iova_t iova[], uint32_t pg_num,
 	uint32_t pg_shift, unsigned int flags);
diff --git a/lib/librte_mempool/rte_mempool_ops_default.c b/lib/librte_mempool/rte_mempool_ops_default.c
index 3defc15..fd63ca1 100644
--- a/lib/librte_mempool/rte_mempool_ops_default.c
+++ b/lib/librte_mempool/rte_mempool_ops_default.c
@@ -16,8 +16,8 @@ rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
 
 	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
 
-	mem_size = rte_mempool_xmem_size(obj_num, total_elt_sz, pg_shift,
-					 mp->flags);
+	mem_size = rte_mempool_calc_mem_size_helper(obj_num, total_elt_sz,
+						    pg_shift);
 
 	*min_chunk_size = RTE_MAX((size_t)1 << pg_shift, total_elt_sz);
 
diff --git a/test/test/test_mempool.c b/test/test/test_mempool.c
index 63f921e..8d29af2 100644
--- a/test/test/test_mempool.c
+++ b/test/test/test_mempool.c
@@ -444,34 +444,6 @@ test_mempool_same_name_twice_creation(void)
 	return 0;
 }
 
-/*
- * Basic test for mempool_xmem functions.
- */
-static int
-test_mempool_xmem_misc(void)
-{
-	uint32_t elt_num, total_size;
-	size_t sz;
-	ssize_t usz;
-
-	elt_num = MAX_KEEP;
-	total_size = rte_mempool_calc_obj_size(MEMPOOL_ELT_SIZE, 0, NULL);
-	sz = rte_mempool_xmem_size(elt_num, total_size, MEMPOOL_PG_SHIFT_MAX,
-					0);
-
-	usz = rte_mempool_xmem_usage(NULL, elt_num, total_size, 0, 1,
-		MEMPOOL_PG_SHIFT_MAX, 0);
-
-	if (sz != (size_t)usz)  {
-		printf("failure @ %s: rte_mempool_xmem_usage(%u, %u) "
-			"returns: %#zx, while expected: %#zx;\n",
-			__func__, elt_num, total_size, sz, (size_t)usz);
-		return -1;
-	}
-
-	return 0;
-}
-
 static void
 walk_cb(struct rte_mempool *mp, void *userdata __rte_unused)
 {
@@ -596,9 +568,6 @@ test_mempool(void)
 	if (test_mempool_same_name_twice_creation() < 0)
 		goto err;
 
-	if (test_mempool_xmem_misc() < 0)
-		goto err;
-
 	/* test the stack handler */
 	if (test_mempool_basic(mp_stack, 1) < 0)
 		goto err;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [PATCH v4 08/11] mempool/octeontx: prepare to remove register memory area op
  2018-04-16 13:24 ` [PATCH v4 00/11] mempool: prepare to " Andrew Rybchenko
                     ` (6 preceding siblings ...)
  2018-04-16 13:24   ` [PATCH v4 07/11] mempool: deprecate xmem functions Andrew Rybchenko
@ 2018-04-16 13:24   ` Andrew Rybchenko
  2018-04-16 13:24   ` [PATCH v4 09/11] mempool/dpaa: " Andrew Rybchenko
                     ` (3 subsequent siblings)
  11 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-04-16 13:24 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ, Santosh Shukla, Jerin Jacob

The callback to populate pool objects has all the required information
and is executed a bit later than the register memory area callback.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Santosh Shukla <Santosh.Shukla@caviumnetworks.com>
---
v3 -> v4:
 - none

v2 -> v3:
 - none

v1 -> v2
 - none

 drivers/mempool/octeontx/rte_mempool_octeontx.c | 25 ++++++++++---------------
 1 file changed, 10 insertions(+), 15 deletions(-)

diff --git a/drivers/mempool/octeontx/rte_mempool_octeontx.c b/drivers/mempool/octeontx/rte_mempool_octeontx.c
index 64ed528..ab94dfe 100644
--- a/drivers/mempool/octeontx/rte_mempool_octeontx.c
+++ b/drivers/mempool/octeontx/rte_mempool_octeontx.c
@@ -152,26 +152,15 @@ octeontx_fpavf_calc_mem_size(const struct rte_mempool *mp,
 }
 
 static int
-octeontx_fpavf_register_memory_area(const struct rte_mempool *mp,
-				    char *vaddr, rte_iova_t paddr, size_t len)
-{
-	RTE_SET_USED(paddr);
-	uint8_t gpool;
-	uintptr_t pool_bar;
-
-	gpool = octeontx_fpa_bufpool_gpool(mp->pool_id);
-	pool_bar = mp->pool_id & ~(uint64_t)FPA_GPOOL_MASK;
-
-	return octeontx_fpavf_pool_set_range(pool_bar, len, vaddr, gpool);
-}
-
-static int
 octeontx_fpavf_populate(struct rte_mempool *mp, unsigned int max_objs,
 			void *vaddr, rte_iova_t iova, size_t len,
 			rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
 {
 	size_t total_elt_sz;
 	size_t off;
+	uint8_t gpool;
+	uintptr_t pool_bar;
+	int ret;
 
 	if (iova == RTE_BAD_IOVA)
 		return -EINVAL;
@@ -188,6 +177,13 @@ octeontx_fpavf_populate(struct rte_mempool *mp, unsigned int max_objs,
 	iova += off;
 	len -= off;
 
+	gpool = octeontx_fpa_bufpool_gpool(mp->pool_id);
+	pool_bar = mp->pool_id & ~(uint64_t)FPA_GPOOL_MASK;
+
+	ret = octeontx_fpavf_pool_set_range(pool_bar, len, vaddr, gpool);
+	if (ret < 0)
+		return ret;
+
 	return rte_mempool_op_populate_default(mp, max_objs, vaddr, iova, len,
 					       obj_cb, obj_cb_arg);
 }
@@ -199,7 +195,6 @@ static struct rte_mempool_ops octeontx_fpavf_ops = {
 	.enqueue = octeontx_fpavf_enqueue,
 	.dequeue = octeontx_fpavf_dequeue,
 	.get_count = octeontx_fpavf_get_count,
-	.register_memory_area = octeontx_fpavf_register_memory_area,
 	.calc_mem_size = octeontx_fpavf_calc_mem_size,
 	.populate = octeontx_fpavf_populate,
 };
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [PATCH v4 09/11] mempool/dpaa: prepare to remove register memory area op
  2018-04-16 13:24 ` [PATCH v4 00/11] mempool: prepare to " Andrew Rybchenko
                     ` (7 preceding siblings ...)
  2018-04-16 13:24   ` [PATCH v4 08/11] mempool/octeontx: prepare to remove register memory area op Andrew Rybchenko
@ 2018-04-16 13:24   ` Andrew Rybchenko
  2018-04-16 13:24   ` [PATCH v4 10/11] mempool: remove callback to register memory area Andrew Rybchenko
                     ` (2 subsequent siblings)
  11 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-04-16 13:24 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ, Hemant Agrawal, Shreyansh Jain

The populate mempool driver callback is executed a bit later than
register memory area and provides the same information, so it will
substitute the latter since it gives more flexibility: in addition to
notifying about the memory area, it allows customizing how mempool
objects are stored in memory.
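
In general the substitution looks roughly like the sketch below
(illustrative only, with made-up names; the dpaa-specific change
follows in the diff): a driver does its per-area bookkeeping first and
then delegates object placement to the default populate implementation.

#include <rte_mempool.h>

static int
example_populate(struct rte_mempool *mp, unsigned int max_objs,
		 void *vaddr, rte_iova_t iova, size_t len,
		 rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
{
	/*
	 * Driver-specific handling of the new memory area (what used to
	 * live in register_memory_area) goes here; vaddr/iova/len are
	 * all available.
	 */

	/* Then let the library place the objects as before. */
	return rte_mempool_op_populate_default(mp, max_objs, vaddr, iova, len,
					       obj_cb, obj_cb_arg);
}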

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
v3 -> v4:
 - none

v2 -> v3:
 - fix build error because of prototype mismatch (char * -> void *)

v1 -> v2:
 - fix build error because of prototype mismatch

 drivers/mempool/dpaa/dpaa_mempool.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/drivers/mempool/dpaa/dpaa_mempool.c b/drivers/mempool/dpaa/dpaa_mempool.c
index 7b82f4b..580e464 100644
--- a/drivers/mempool/dpaa/dpaa_mempool.c
+++ b/drivers/mempool/dpaa/dpaa_mempool.c
@@ -264,10 +264,9 @@ dpaa_mbuf_get_count(const struct rte_mempool *mp)
 }
 
 static int
-dpaa_register_memory_area(const struct rte_mempool *mp,
-			  char *vaddr __rte_unused,
-			  rte_iova_t paddr __rte_unused,
-			  size_t len)
+dpaa_populate(struct rte_mempool *mp, unsigned int max_objs,
+	      void *vaddr, rte_iova_t paddr, size_t len,
+	      rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
 {
 	struct dpaa_bp_info *bp_info;
 	unsigned int total_elt_sz;
@@ -289,7 +288,9 @@ dpaa_register_memory_area(const struct rte_mempool *mp,
 	if (len >= total_elt_sz * mp->size)
 		bp_info->flags |= DPAA_MPOOL_SINGLE_SEGMENT;
 
-	return 0;
+	return rte_mempool_op_populate_default(mp, max_objs, vaddr, paddr, len,
+					       obj_cb, obj_cb_arg);
+
 }
 
 struct rte_mempool_ops dpaa_mpool_ops = {
@@ -299,7 +300,7 @@ struct rte_mempool_ops dpaa_mpool_ops = {
 	.enqueue = dpaa_mbuf_free_bulk,
 	.dequeue = dpaa_mbuf_alloc_bulk,
 	.get_count = dpaa_mbuf_get_count,
-	.register_memory_area = dpaa_register_memory_area,
+	.populate = dpaa_populate,
 };
 
 MEMPOOL_REGISTER_OPS(dpaa_mpool_ops);
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [PATCH v4 10/11] mempool: remove callback to register memory area
  2018-04-16 13:24 ` [PATCH v4 00/11] mempool: prepare to " Andrew Rybchenko
                     ` (8 preceding siblings ...)
  2018-04-16 13:24   ` [PATCH v4 09/11] mempool/dpaa: " Andrew Rybchenko
@ 2018-04-16 13:24   ` Andrew Rybchenko
  2018-04-16 13:24   ` [PATCH v4 11/11] mempool: support flushing the default cache of the mempool Andrew Rybchenko
  2018-04-24  0:20   ` [PATCH v4 00/11] mempool: prepare to add bucket driver Thomas Monjalon
  11 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-04-16 13:24 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ

The callback is not required any more since there is a new callback
to populate objects using the provided memory area, which supplies
the same information.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Santosh Shukla <Santosh.Shukla@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
v3 -> v4:
 - none

v2 -> v3:
 - none

v1 -> v2:
 - none

RFCv2 -> v1:
 - advertise ABI changes in release notes

 doc/guides/rel_notes/deprecation.rst       |  1 -
 doc/guides/rel_notes/release_18_05.rst     |  2 ++
 lib/librte_mempool/rte_mempool.c           |  5 -----
 lib/librte_mempool/rte_mempool.h           | 31 ------------------------------
 lib/librte_mempool/rte_mempool_ops.c       | 14 --------------
 lib/librte_mempool/rte_mempool_version.map |  1 -
 6 files changed, 2 insertions(+), 52 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 8d1b362..02ffcd4 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -51,7 +51,6 @@ Deprecation Notices
 
   The following changes are planned:
 
-  - substitute ``register_memory_area`` with ``populate`` ops.
   - addition of new op to allocate contiguous
     block of objects if underlying driver supports it.
 
diff --git a/doc/guides/rel_notes/release_18_05.rst b/doc/guides/rel_notes/release_18_05.rst
index 3869d04..3ed4aae 100644
--- a/doc/guides/rel_notes/release_18_05.rst
+++ b/doc/guides/rel_notes/release_18_05.rst
@@ -223,6 +223,8 @@ ABI Changes
   Callback ``get_capabilities`` has been removed from ``rte_mempool_ops``
   since its features are covered by ``calc_mem_size`` and ``populate``
   callbacks.
+  Callback ``register_memory_area`` has been removed from ``rte_mempool_ops``
+  since the new callback ``populate`` may be used instead of it.
 
 
 Removed Items
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index c63c363..84b3d64 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -378,11 +378,6 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	if (ret != 0)
 		return ret;
 
-	/* Notify memory area to mempool */
-	ret = rte_mempool_ops_register_memory_area(mp, vaddr, iova, len);
-	if (ret != -ENOTSUP && ret < 0)
-		return ret;
-
 	/* mempool is already populated */
 	if (mp->populated_size >= mp->size)
 		return -ENOSPC;
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 9107f5a..314f909 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -371,12 +371,6 @@ typedef int (*rte_mempool_dequeue_t)(struct rte_mempool *mp,
 typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
 
 /**
- * Notify new memory area to mempool.
- */
-typedef int (*rte_mempool_ops_register_memory_area_t)
-(const struct rte_mempool *mp, char *vaddr, rte_iova_t iova, size_t len);
-
-/**
  * Calculate memory size required to store given number of objects.
  *
  * If mempool objects are not required to be IOVA-contiguous
@@ -514,10 +508,6 @@ struct rte_mempool_ops {
 	rte_mempool_dequeue_t dequeue;   /**< Dequeue an object. */
 	rte_mempool_get_count get_count; /**< Get qty of available objs. */
 	/**
-	 * Notify new memory area to mempool
-	 */
-	rte_mempool_ops_register_memory_area_t register_memory_area;
-	/**
 	 * Optional callback to calculate memory size required to
 	 * store specified number of objects.
 	 */
@@ -639,27 +629,6 @@ unsigned
 rte_mempool_ops_get_count(const struct rte_mempool *mp);
 
 /**
- * @internal wrapper for mempool_ops register_memory_area callback.
- * API to notify the mempool handler when a new memory area is added to pool.
- *
- * @param mp
- *   Pointer to the memory pool.
- * @param vaddr
- *   Pointer to the buffer virtual address.
- * @param iova
- *   Pointer to the buffer IO address.
- * @param len
- *   Pool size.
- * @return
- *   - 0: Success;
- *   - -ENOTSUP - doesn't support register_memory_area ops (valid error case).
- *   - Otherwise, rte_mempool_populate_phys fails thus pool create fails.
- */
-int
-rte_mempool_ops_register_memory_area(const struct rte_mempool *mp,
-				char *vaddr, rte_iova_t iova, size_t len);
-
-/**
  * @internal wrapper for mempool_ops calc_mem_size callback.
  * API to calculate size of memory required to store specified number of
  * object.
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
index 6ac669a..ea9be1e 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -57,7 +57,6 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h)
 	ops->enqueue = h->enqueue;
 	ops->dequeue = h->dequeue;
 	ops->get_count = h->get_count;
-	ops->register_memory_area = h->register_memory_area;
 	ops->calc_mem_size = h->calc_mem_size;
 	ops->populate = h->populate;
 
@@ -99,19 +98,6 @@ rte_mempool_ops_get_count(const struct rte_mempool *mp)
 }
 
 /* wrapper to notify new memory area to external mempool */
-int
-rte_mempool_ops_register_memory_area(const struct rte_mempool *mp, char *vaddr,
-					rte_iova_t iova, size_t len)
-{
-	struct rte_mempool_ops *ops;
-
-	ops = rte_mempool_get_ops(mp->ops_index);
-
-	RTE_FUNC_PTR_OR_ERR_RET(ops->register_memory_area, -ENOTSUP);
-	return ops->register_memory_area(mp, vaddr, iova, len);
-}
-
-/* wrapper to notify new memory area to external mempool */
 ssize_t
 rte_mempool_ops_calc_mem_size(const struct rte_mempool *mp,
 				uint32_t obj_num, uint32_t pg_shift,
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index 637f73f..cf375db 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -45,7 +45,6 @@ DPDK_16.07 {
 DPDK_17.11 {
 	global:
 
-	rte_mempool_ops_register_memory_area;
 	rte_mempool_populate_iova;
 	rte_mempool_populate_iova_tab;
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [PATCH v4 11/11] mempool: support flushing the default cache of the mempool
  2018-04-16 13:24 ` [PATCH v4 00/11] mempool: prepare to " Andrew Rybchenko
                     ` (9 preceding siblings ...)
  2018-04-16 13:24   ` [PATCH v4 10/11] mempool: remove callback to register memory area Andrew Rybchenko
@ 2018-04-16 13:24   ` Andrew Rybchenko
  2018-04-24  0:20   ` [PATCH v4 00/11] mempool: prepare to add bucket driver Thomas Monjalon
  11 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-04-16 13:24 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ, Artem V. Andreev

From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>

The mempool get/put API takes care of the cache itself, but sometimes
it is necessary to flush the cache explicitly.

The function is moved within the file since it now requires
rte_mempool_default_cache().
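
For example (a minimal usage sketch with a made-up helper name, not
part of this patch):

#include <rte_mempool.h>

static void
drain_caches(struct rte_mempool *mp, struct rte_mempool_cache *user_cache)
{
	/* Return objects held in a user-owned cache back to the pool. */
	rte_mempool_cache_flush(user_cache, mp);

	/*
	 * With this patch, passing NULL flushes the default cache of
	 * the calling lcore (and is a no-op if there is none).
	 */
	rte_mempool_cache_flush(NULL, mp);
}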

Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
v3 -> v4:
 - none

v2 -> v3:
 - none

v1 -> v2:
 - none

 lib/librte_mempool/rte_mempool.h | 36 ++++++++++++++++++++----------------
 1 file changed, 20 insertions(+), 16 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 314f909..3e06ae0 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -1169,22 +1169,6 @@ void
 rte_mempool_cache_free(struct rte_mempool_cache *cache);
 
 /**
- * Flush a user-owned mempool cache to the specified mempool.
- *
- * @param cache
- *   A pointer to the mempool cache.
- * @param mp
- *   A pointer to the mempool.
- */
-static __rte_always_inline void
-rte_mempool_cache_flush(struct rte_mempool_cache *cache,
-			struct rte_mempool *mp)
-{
-	rte_mempool_ops_enqueue_bulk(mp, cache->objs, cache->len);
-	cache->len = 0;
-}
-
-/**
  * Get a pointer to the per-lcore default mempool cache.
  *
  * @param mp
@@ -1207,6 +1191,26 @@ rte_mempool_default_cache(struct rte_mempool *mp, unsigned lcore_id)
 }
 
 /**
+ * Flush a user-owned mempool cache to the specified mempool.
+ *
+ * @param cache
+ *   A pointer to the mempool cache.
+ * @param mp
+ *   A pointer to the mempool.
+ */
+static __rte_always_inline void
+rte_mempool_cache_flush(struct rte_mempool_cache *cache,
+			struct rte_mempool *mp)
+{
+	if (cache == NULL)
+		cache = rte_mempool_default_cache(mp, rte_lcore_id());
+	if (cache == NULL || cache->len == 0)
+		return;
+	rte_mempool_ops_enqueue_bulk(mp, cache->objs, cache->len);
+	cache->len = 0;
+}
+
+/**
  * @internal Put several objects back in the mempool; used internally.
  * @param mp
  *   A pointer to the mempool structure.
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [PATCH v2 0/6] mempool: add bucket driver
  2017-11-24 16:06 [RFC PATCH 0/6] mempool: add bucket mempool driver Andrew Rybchenko
                   ` (8 preceding siblings ...)
  2018-04-16 13:24 ` [PATCH v4 00/11] mempool: prepare to " Andrew Rybchenko
@ 2018-04-16 13:33 ` Andrew Rybchenko
  2018-04-16 13:33   ` [PATCH v2 1/6] mempool/bucket: implement bucket mempool manager Andrew Rybchenko
                     ` (6 more replies)
  2018-04-25 16:32 ` [PATCH v3 " Andrew Rybchenko
  2018-04-26 10:59 ` [PATCH v4 0/5] mempool: add bucket driver Andrew Rybchenko
  11 siblings, 7 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-04-16 13:33 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ

The initial patch series [1] (RFCv1 is [2]) is split into two to simplify
processing.  This is the second part, which relies on the first one [3].

It should be applied on top of [3].

The patch series adds the bucket mempool driver which allows allocating
(both physically and virtually) contiguous blocks of objects and adds
mempool API to do it. It is still capable of providing separate objects,
but it is definitely more heavy-weight than the ring/stack drivers.
The driver will be used by future Solarflare driver enhancements
which allow utilizing physically contiguous blocks in the NIC firmware.

The target use case is dequeuing objects in blocks and enqueuing
separate objects back (which are collected in buckets to be dequeued
again). So, the memory pool with the bucket driver is created by an
application and provided to a networking PMD receive queue. The choice
of the bucket driver is done using rte_eth_dev_pool_ops_supported().
A PMD that relies upon contiguous block allocation should report the
bucket driver as the only supported and preferred one.
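
A minimal application-side sketch of this flow (illustrative only: the
ops name "bucket", the helper name and the error handling are
assumptions, not something mandated by this series):

#include <rte_ethdev.h>
#include <rte_mempool.h>

static struct rte_mempool *
create_rxq_pool(uint16_t port_id, unsigned int nb_objs,
		unsigned int obj_size, int socket_id)
{
	struct rte_mempool *mp;

	/* Ask the PMD whether it can work with the bucket driver at all. */
	if (rte_eth_dev_pool_ops_supported(port_id, "bucket") < 0)
		return NULL;

	mp = rte_mempool_create_empty("rxq_pool", nb_objs, obj_size,
				      0, 0, socket_id, 0);
	if (mp == NULL)
		return NULL;

	if (rte_mempool_set_ops_byname(mp, "bucket", NULL) != 0 ||
	    rte_mempool_populate_default(mp) < 0) {
		rte_mempool_free(mp);
		return NULL;
	}

	return mp;
}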

The benefit of introducing the contiguous block dequeue operation is
demonstrated by performance measurements using the autotest with minor
enhancements:
 - in the original test bulks are powers of two, which is unacceptable
   for us, so they are changed to multiples of contig_block_size;
 - the test code is duplicated to support plain dequeue and
   dequeue_contig_blocks;
 - all the extra test variations (with/without cache etc.) are eliminated;
 - a fake read from the dequeued buffer is added (in both cases) to
   simulate mbuf access.

start performance test for bucket (without cache)
mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=   1 n_keep=  30 Srate_persec=   111935488
mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=   1 n_keep=  60 Srate_persec=   115290931
mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=  15 n_keep=  30 Srate_persec=   353055539
mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=  15 n_keep=  60 Srate_persec=   353330790
mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=   1 n_keep=  30 Srate_persec=   224657407
mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=   1 n_keep=  60 Srate_persec=   230411468
mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=  15 n_keep=  30 Srate_persec=   706700902
mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=  15 n_keep=  60 Srate_persec=   703673139
mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=   1 n_keep=  30 Srate_persec=   425236887
mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=   1 n_keep=  60 Srate_persec=   437295512
mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=  15 n_keep=  30 Srate_persec=  1343409356
mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=  15 n_keep=  60 Srate_persec=  1336567397
start performance test for bucket (without cache + contiguous dequeue)
mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=   1 n_keep=  30 Crate_persec=   122945536
mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=   1 n_keep=  60 Crate_persec=   126458265
mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=  15 n_keep=  30 Crate_persec=   374262988
mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=  15 n_keep=  60 Crate_persec=   377316966
mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=   1 n_keep=  30 Crate_persec=   244842496
mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=   1 n_keep=  60 Crate_persec=   251618917
mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=  15 n_keep=  30 Crate_persec=   751226060
mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=  15 n_keep=  60 Crate_persec=   756233010
mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=   1 n_keep=  30 Crate_persec=   462068120
mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=   1 n_keep=  60 Crate_persec=   476997221
mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=  15 n_keep=  30 Crate_persec=  1432171313
mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=  15 n_keep=  60 Crate_persec=  1438829771

The number of objects in the contiguous block is a function of the
bucket memory size (a .config option) and the total element size.
In the future, an additional API with the possibility to pass
parameters on mempool allocation may be added.
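
As a rough illustration only (the driver additionally reserves a small
per-bucket header and aligns objects, so the real block size may be a
bit smaller; the info API added later in this series is the proper way
for an application to query it):

#include <rte_mempool.h>

/*
 * Approximate number of objects per contiguous block for a given pool.
 * The macro name is assumed to be derived from the
 * CONFIG_RTE_DRIVER_MEMPOOL_BUCKET_SIZE_KB .config option (64 KB by
 * default in config/common_base).
 */
static unsigned int
approx_objs_per_block(const struct rte_mempool *mp)
{
	size_t bucket_mem = (size_t)RTE_DRIVER_MEMPOOL_BUCKET_SIZE_KB * 1024;
	size_t total_elt_sz = mp->header_size + mp->elt_size +
			      mp->trailer_size;

	return bucket_mem / total_elt_sz;
}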

It breaks the ABI since it changes rte_mempool_ops. The ABI version is
already bumped in [4].


[1] https://dpdk.org/ml/archives/dev/2018-January/088698.html
[2] https://dpdk.org/ml/archives/dev/2017-November/082335.html
[3] https://dpdk.org/ml/archives/dev/2018-April/097354.html
[4] https://dpdk.org/ml/archives/dev/2018-April/097352.html

v1 -> v2:
  - just rebase

RFCv2 -> v1:
  - rebased on top of [3]
  - cleanup deprecation notice when it is done
  - mark a new API experimental
  - move contig blocks dequeue debug checks/processing to the library function
  - add contig blocks get stats
  - add release notes

RFCv1 -> RFCv2:
  - change the info API to get information from the driver that the
    API user requires in order to know the contiguous block size
  - use SPDX tags
  - avoid all objects affinity to single lcore
  - fix bucket get_count
  - fix NO_CACHE_ALIGN case in bucket mempool



Andrew Rybchenko (1):
  doc: advertise bucket mempool driver

Artem V. Andreev (5):
  mempool/bucket: implement bucket mempool manager
  mempool: implement abstract mempool info API
  mempool: support block dequeue operation
  mempool/bucket: implement block dequeue operation
  mempool/bucket: do not allow one lcore to grab all buckets

 MAINTAINERS                                        |   9 +
 config/common_base                                 |   2 +
 doc/guides/rel_notes/deprecation.rst               |   7 -
 doc/guides/rel_notes/release_18_05.rst             |   9 +
 drivers/mempool/Makefile                           |   1 +
 drivers/mempool/bucket/Makefile                    |  27 +
 drivers/mempool/bucket/meson.build                 |   9 +
 drivers/mempool/bucket/rte_mempool_bucket.c        | 627 +++++++++++++++++++++
 .../mempool/bucket/rte_mempool_bucket_version.map  |   4 +
 lib/librte_mempool/Makefile                        |   1 +
 lib/librte_mempool/meson.build                     |   2 +
 lib/librte_mempool/rte_mempool.c                   |  39 ++
 lib/librte_mempool/rte_mempool.h                   | 190 +++++++
 lib/librte_mempool/rte_mempool_ops.c               |  16 +
 lib/librte_mempool/rte_mempool_version.map         |   8 +
 mk/rte.app.mk                                      |   1 +
 16 files changed, 945 insertions(+), 7 deletions(-)
 create mode 100644 drivers/mempool/bucket/Makefile
 create mode 100644 drivers/mempool/bucket/meson.build
 create mode 100644 drivers/mempool/bucket/rte_mempool_bucket.c
 create mode 100644 drivers/mempool/bucket/rte_mempool_bucket_version.map

-- 
2.7.4

^ permalink raw reply	[flat|nested] 197+ messages in thread

* [PATCH v2 1/6] mempool/bucket: implement bucket mempool manager
  2018-04-16 13:33 ` [PATCH v2 0/6] mempool: " Andrew Rybchenko
@ 2018-04-16 13:33   ` Andrew Rybchenko
  2018-04-16 13:33   ` [PATCH v2 2/6] mempool: implement abstract mempool info API Andrew Rybchenko
                     ` (5 subsequent siblings)
  6 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-04-16 13:33 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ, Artem V. Andreev

From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>

The manager provides a way to allocate a physically and virtually
contiguous set of objects.

Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 MAINTAINERS                                        |   9 +
 config/common_base                                 |   2 +
 drivers/mempool/Makefile                           |   1 +
 drivers/mempool/bucket/Makefile                    |  27 +
 drivers/mempool/bucket/meson.build                 |   9 +
 drivers/mempool/bucket/rte_mempool_bucket.c        | 562 +++++++++++++++++++++
 .../mempool/bucket/rte_mempool_bucket_version.map  |   4 +
 mk/rte.app.mk                                      |   1 +
 8 files changed, 615 insertions(+)
 create mode 100644 drivers/mempool/bucket/Makefile
 create mode 100644 drivers/mempool/bucket/meson.build
 create mode 100644 drivers/mempool/bucket/rte_mempool_bucket.c
 create mode 100644 drivers/mempool/bucket/rte_mempool_bucket_version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index 431442e..0d2305f 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -365,6 +365,15 @@ F: test/test/test_rawdev.c
 F: doc/guides/prog_guide/rawdev.rst
 
 
+Memory Pool Drivers
+-------------------
+
+Bucket memory pool
+M: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
+M: Andrew Rybchenko <arybchenko@solarflare.com>
+F: drivers/mempool/bucket/
+
+
 Bus Drivers
 -----------
 
diff --git a/config/common_base b/config/common_base
index c2b0d91..d722de5 100644
--- a/config/common_base
+++ b/config/common_base
@@ -615,6 +615,8 @@ CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG=n
 #
 # Compile Mempool drivers
 #
+CONFIG_RTE_DRIVER_MEMPOOL_BUCKET=y
+CONFIG_RTE_DRIVER_MEMPOOL_BUCKET_SIZE_KB=64
 CONFIG_RTE_DRIVER_MEMPOOL_RING=y
 CONFIG_RTE_DRIVER_MEMPOOL_STACK=y
 
diff --git a/drivers/mempool/Makefile b/drivers/mempool/Makefile
index fc8b73b..28c2e83 100644
--- a/drivers/mempool/Makefile
+++ b/drivers/mempool/Makefile
@@ -3,6 +3,7 @@
 
 include $(RTE_SDK)/mk/rte.vars.mk
 
+DIRS-$(CONFIG_RTE_DRIVER_MEMPOOL_BUCKET) += bucket
 ifeq ($(CONFIG_RTE_LIBRTE_DPAA_BUS),y)
 DIRS-$(CONFIG_RTE_LIBRTE_DPAA_MEMPOOL) += dpaa
 endif
diff --git a/drivers/mempool/bucket/Makefile b/drivers/mempool/bucket/Makefile
new file mode 100644
index 0000000..7364916
--- /dev/null
+++ b/drivers/mempool/bucket/Makefile
@@ -0,0 +1,27 @@
+# SPDX-License-Identifier: BSD-3-Clause
+#
+# Copyright (c) 2017-2018 Solarflare Communications Inc.
+# All rights reserved.
+#
+# This software was jointly developed between OKTET Labs (under contract
+# for Solarflare) and Solarflare Communications, Inc.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_mempool_bucket.a
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+LDLIBS += -lrte_eal -lrte_mempool -lrte_ring
+
+EXPORT_MAP := rte_mempool_bucket_version.map
+
+LIBABIVER := 1
+
+SRCS-$(CONFIG_RTE_DRIVER_MEMPOOL_BUCKET) += rte_mempool_bucket.c
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/mempool/bucket/meson.build b/drivers/mempool/bucket/meson.build
new file mode 100644
index 0000000..618d791
--- /dev/null
+++ b/drivers/mempool/bucket/meson.build
@@ -0,0 +1,9 @@
+# SPDX-License-Identifier: BSD-3-Clause
+#
+# Copyright (c) 2017-2018 Solarflare Communications Inc.
+# All rights reserved.
+#
+# This software was jointly developed between OKTET Labs (under contract
+# for Solarflare) and Solarflare Communications, Inc.
+
+sources = files('rte_mempool_bucket.c')
diff --git a/drivers/mempool/bucket/rte_mempool_bucket.c b/drivers/mempool/bucket/rte_mempool_bucket.c
new file mode 100644
index 0000000..5a1bd79
--- /dev/null
+++ b/drivers/mempool/bucket/rte_mempool_bucket.c
@@ -0,0 +1,562 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright (c) 2017-2018 Solarflare Communications Inc.
+ * All rights reserved.
+ *
+ * This software was jointly developed between OKTET Labs (under contract
+ * for Solarflare) and Solarflare Communications, Inc.
+ */
+
+#include <stdbool.h>
+#include <stdio.h>
+#include <string.h>
+
+#include <rte_errno.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+#include <rte_malloc.h>
+
+/*
+ * The general idea of the bucket mempool driver is as follows.
+ * We keep track of physically contiguous groups (buckets) of objects
+ * of a certain size. Every such group has a counter that is
+ * incremented every time an object from that group is enqueued.
+ * Until the bucket is full, no objects from it are eligible for allocation.
+ * If a request is made to dequeue a multiple of the bucket size, it is
+ * satisfied by returning whole buckets, instead of separate objects.
+ */
+
+
+struct bucket_header {
+	unsigned int lcore_id;
+	uint8_t fill_cnt;
+};
+
+struct bucket_stack {
+	unsigned int top;
+	unsigned int limit;
+	void *objects[];
+};
+
+struct bucket_data {
+	unsigned int header_size;
+	unsigned int total_elt_size;
+	unsigned int obj_per_bucket;
+	uintptr_t bucket_page_mask;
+	struct rte_ring *shared_bucket_ring;
+	struct bucket_stack *buckets[RTE_MAX_LCORE];
+	/*
+	 * Multi-producer single-consumer ring to hold objects that are
+	 * returned to the mempool at a different lcore than initially
+	 * dequeued
+	 */
+	struct rte_ring *adoption_buffer_rings[RTE_MAX_LCORE];
+	struct rte_ring *shared_orphan_ring;
+	struct rte_mempool *pool;
+	unsigned int bucket_mem_size;
+};
+
+static struct bucket_stack *
+bucket_stack_create(const struct rte_mempool *mp, unsigned int n_elts)
+{
+	struct bucket_stack *stack;
+
+	stack = rte_zmalloc_socket("bucket_stack",
+				   sizeof(struct bucket_stack) +
+				   n_elts * sizeof(void *),
+				   RTE_CACHE_LINE_SIZE,
+				   mp->socket_id);
+	if (stack == NULL)
+		return NULL;
+	stack->limit = n_elts;
+	stack->top = 0;
+
+	return stack;
+}
+
+static void
+bucket_stack_push(struct bucket_stack *stack, void *obj)
+{
+	RTE_ASSERT(stack->top < stack->limit);
+	stack->objects[stack->top++] = obj;
+}
+
+static void *
+bucket_stack_pop_unsafe(struct bucket_stack *stack)
+{
+	RTE_ASSERT(stack->top > 0);
+	return stack->objects[--stack->top];
+}
+
+static void *
+bucket_stack_pop(struct bucket_stack *stack)
+{
+	if (stack->top == 0)
+		return NULL;
+	return bucket_stack_pop_unsafe(stack);
+}
+
+static int
+bucket_enqueue_single(struct bucket_data *bd, void *obj)
+{
+	int rc = 0;
+	uintptr_t addr = (uintptr_t)obj;
+	struct bucket_header *hdr;
+	unsigned int lcore_id = rte_lcore_id();
+
+	addr &= bd->bucket_page_mask;
+	hdr = (struct bucket_header *)addr;
+
+	if (likely(hdr->lcore_id == lcore_id)) {
+		if (hdr->fill_cnt < bd->obj_per_bucket - 1) {
+			hdr->fill_cnt++;
+		} else {
+			hdr->fill_cnt = 0;
+			/* Stack is big enough to put all buckets */
+			bucket_stack_push(bd->buckets[lcore_id], hdr);
+		}
+	} else if (hdr->lcore_id != LCORE_ID_ANY) {
+		struct rte_ring *adopt_ring =
+			bd->adoption_buffer_rings[hdr->lcore_id];
+
+		rc = rte_ring_enqueue(adopt_ring, obj);
+		/* Ring is big enough to put all objects */
+		RTE_ASSERT(rc == 0);
+	} else if (hdr->fill_cnt < bd->obj_per_bucket - 1) {
+		hdr->fill_cnt++;
+	} else {
+		hdr->fill_cnt = 0;
+		rc = rte_ring_enqueue(bd->shared_bucket_ring, hdr);
+		/* Ring is big enough to put all buckets */
+		RTE_ASSERT(rc == 0);
+	}
+
+	return rc;
+}
+
+static int
+bucket_enqueue(struct rte_mempool *mp, void * const *obj_table,
+	       unsigned int n)
+{
+	struct bucket_data *bd = mp->pool_data;
+	unsigned int i;
+	int rc = 0;
+
+	for (i = 0; i < n; i++) {
+		rc = bucket_enqueue_single(bd, obj_table[i]);
+		RTE_ASSERT(rc == 0);
+	}
+	return rc;
+}
+
+static void **
+bucket_fill_obj_table(const struct bucket_data *bd, void **pstart,
+		      void **obj_table, unsigned int n)
+{
+	unsigned int i;
+	uint8_t *objptr = *pstart;
+
+	for (objptr += bd->header_size, i = 0; i < n;
+	     i++, objptr += bd->total_elt_size)
+		*obj_table++ = objptr;
+	*pstart = objptr;
+	return obj_table;
+}
+
+static int
+bucket_dequeue_orphans(struct bucket_data *bd, void **obj_table,
+		       unsigned int n_orphans)
+{
+	unsigned int i;
+	int rc;
+	uint8_t *objptr;
+
+	rc = rte_ring_dequeue_bulk(bd->shared_orphan_ring, obj_table,
+				   n_orphans, NULL);
+	if (unlikely(rc != (int)n_orphans)) {
+		struct bucket_header *hdr;
+
+		objptr = bucket_stack_pop(bd->buckets[rte_lcore_id()]);
+		hdr = (struct bucket_header *)objptr;
+
+		if (objptr == NULL) {
+			rc = rte_ring_dequeue(bd->shared_bucket_ring,
+					      (void **)&objptr);
+			if (rc != 0) {
+				rte_errno = ENOBUFS;
+				return -rte_errno;
+			}
+			hdr = (struct bucket_header *)objptr;
+			hdr->lcore_id = rte_lcore_id();
+		}
+		hdr->fill_cnt = 0;
+		bucket_fill_obj_table(bd, (void **)&objptr, obj_table,
+				      n_orphans);
+		for (i = n_orphans; i < bd->obj_per_bucket; i++,
+			     objptr += bd->total_elt_size) {
+			rc = rte_ring_enqueue(bd->shared_orphan_ring,
+					      objptr);
+			if (rc != 0) {
+				RTE_ASSERT(0);
+				rte_errno = -rc;
+				return rc;
+			}
+		}
+	}
+
+	return 0;
+}
+
+static int
+bucket_dequeue_buckets(struct bucket_data *bd, void **obj_table,
+		       unsigned int n_buckets)
+{
+	struct bucket_stack *cur_stack = bd->buckets[rte_lcore_id()];
+	unsigned int n_buckets_from_stack = RTE_MIN(n_buckets, cur_stack->top);
+	void **obj_table_base = obj_table;
+
+	n_buckets -= n_buckets_from_stack;
+	while (n_buckets_from_stack-- > 0) {
+		void *obj = bucket_stack_pop_unsafe(cur_stack);
+
+		obj_table = bucket_fill_obj_table(bd, &obj, obj_table,
+						  bd->obj_per_bucket);
+	}
+	while (n_buckets-- > 0) {
+		struct bucket_header *hdr;
+
+		if (unlikely(rte_ring_dequeue(bd->shared_bucket_ring,
+					      (void **)&hdr) != 0)) {
+			/*
+			 * Return the already-dequeued buffers
+			 * back to the mempool
+			 */
+			bucket_enqueue(bd->pool, obj_table_base,
+				       obj_table - obj_table_base);
+			rte_errno = ENOBUFS;
+			return -rte_errno;
+		}
+		hdr->lcore_id = rte_lcore_id();
+		obj_table = bucket_fill_obj_table(bd, (void **)&hdr,
+						  obj_table,
+						  bd->obj_per_bucket);
+	}
+
+	return 0;
+}
+
+static int
+bucket_adopt_orphans(struct bucket_data *bd)
+{
+	int rc = 0;
+	struct rte_ring *adopt_ring =
+		bd->adoption_buffer_rings[rte_lcore_id()];
+
+	if (unlikely(!rte_ring_empty(adopt_ring))) {
+		void *orphan;
+
+		while (rte_ring_sc_dequeue(adopt_ring, &orphan) == 0) {
+			rc = bucket_enqueue_single(bd, orphan);
+			RTE_ASSERT(rc == 0);
+		}
+	}
+	return rc;
+}
+
+static int
+bucket_dequeue(struct rte_mempool *mp, void **obj_table, unsigned int n)
+{
+	struct bucket_data *bd = mp->pool_data;
+	unsigned int n_buckets = n / bd->obj_per_bucket;
+	unsigned int n_orphans = n - n_buckets * bd->obj_per_bucket;
+	int rc = 0;
+
+	bucket_adopt_orphans(bd);
+
+	if (unlikely(n_orphans > 0)) {
+		rc = bucket_dequeue_orphans(bd, obj_table +
+					    (n_buckets * bd->obj_per_bucket),
+					    n_orphans);
+		if (rc != 0)
+			return rc;
+	}
+
+	if (likely(n_buckets > 0)) {
+		rc = bucket_dequeue_buckets(bd, obj_table, n_buckets);
+		if (unlikely(rc != 0) && n_orphans > 0) {
+			rte_ring_enqueue_bulk(bd->shared_orphan_ring,
+					      obj_table + (n_buckets *
+							   bd->obj_per_bucket),
+					      n_orphans, NULL);
+		}
+	}
+
+	return rc;
+}
+
+static void
+count_underfilled_buckets(struct rte_mempool *mp,
+			  void *opaque,
+			  struct rte_mempool_memhdr *memhdr,
+			  __rte_unused unsigned int mem_idx)
+{
+	unsigned int *pcount = opaque;
+	const struct bucket_data *bd = mp->pool_data;
+	unsigned int bucket_page_sz =
+		(unsigned int)(~bd->bucket_page_mask + 1);
+	uintptr_t align;
+	uint8_t *iter;
+
+	align = (uintptr_t)RTE_PTR_ALIGN_CEIL(memhdr->addr, bucket_page_sz) -
+		(uintptr_t)memhdr->addr;
+
+	for (iter = (uint8_t *)memhdr->addr + align;
+	     iter < (uint8_t *)memhdr->addr + memhdr->len;
+	     iter += bucket_page_sz) {
+		struct bucket_header *hdr = (struct bucket_header *)iter;
+
+		*pcount += hdr->fill_cnt;
+	}
+}
+
+static unsigned int
+bucket_get_count(const struct rte_mempool *mp)
+{
+	const struct bucket_data *bd = mp->pool_data;
+	unsigned int count =
+		bd->obj_per_bucket * rte_ring_count(bd->shared_bucket_ring) +
+		rte_ring_count(bd->shared_orphan_ring);
+	unsigned int i;
+
+	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		if (!rte_lcore_is_enabled(i))
+			continue;
+		count += bd->obj_per_bucket * bd->buckets[i]->top;
+	}
+
+	rte_mempool_mem_iter((struct rte_mempool *)(uintptr_t)mp,
+			     count_underfilled_buckets, &count);
+
+	return count;
+}
+
+static int
+bucket_alloc(struct rte_mempool *mp)
+{
+	int rg_flags = 0;
+	int rc = 0;
+	char rg_name[RTE_RING_NAMESIZE];
+	struct bucket_data *bd;
+	unsigned int i;
+	unsigned int bucket_header_size;
+
+	bd = rte_zmalloc_socket("bucket_pool", sizeof(*bd),
+				RTE_CACHE_LINE_SIZE, mp->socket_id);
+	if (bd == NULL) {
+		rc = -ENOMEM;
+		goto no_mem_for_data;
+	}
+	bd->pool = mp;
+	if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
+		bucket_header_size = sizeof(struct bucket_header);
+	else
+		bucket_header_size = RTE_CACHE_LINE_SIZE;
+	RTE_BUILD_BUG_ON(sizeof(struct bucket_header) > RTE_CACHE_LINE_SIZE);
+	bd->header_size = mp->header_size + bucket_header_size;
+	bd->total_elt_size = mp->header_size + mp->elt_size + mp->trailer_size;
+	bd->bucket_mem_size = RTE_DRIVER_MEMPOOL_BUCKET_SIZE_KB * 1024;
+	bd->obj_per_bucket = (bd->bucket_mem_size - bucket_header_size) /
+		bd->total_elt_size;
+	bd->bucket_page_mask = ~(rte_align64pow2(bd->bucket_mem_size) - 1);
+
+	if (mp->flags & MEMPOOL_F_SP_PUT)
+		rg_flags |= RING_F_SP_ENQ;
+	if (mp->flags & MEMPOOL_F_SC_GET)
+		rg_flags |= RING_F_SC_DEQ;
+
+	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		if (!rte_lcore_is_enabled(i))
+			continue;
+		bd->buckets[i] =
+			bucket_stack_create(mp, mp->size / bd->obj_per_bucket);
+		if (bd->buckets[i] == NULL) {
+			rc = -ENOMEM;
+			goto no_mem_for_stacks;
+		}
+		rc = snprintf(rg_name, sizeof(rg_name),
+			      RTE_MEMPOOL_MZ_FORMAT ".a%u", mp->name, i);
+		if (rc < 0 || rc >= (int)sizeof(rg_name)) {
+			rc = -ENAMETOOLONG;
+			goto no_mem_for_stacks;
+		}
+		bd->adoption_buffer_rings[i] =
+			rte_ring_create(rg_name, rte_align32pow2(mp->size + 1),
+					mp->socket_id,
+					rg_flags | RING_F_SC_DEQ);
+		if (bd->adoption_buffer_rings[i] == NULL) {
+			rc = -rte_errno;
+			goto no_mem_for_stacks;
+		}
+	}
+
+	rc = snprintf(rg_name, sizeof(rg_name),
+		      RTE_MEMPOOL_MZ_FORMAT ".0", mp->name);
+	if (rc < 0 || rc >= (int)sizeof(rg_name)) {
+		rc = -ENAMETOOLONG;
+		goto invalid_shared_orphan_ring;
+	}
+	bd->shared_orphan_ring =
+		rte_ring_create(rg_name, rte_align32pow2(mp->size + 1),
+				mp->socket_id, rg_flags);
+	if (bd->shared_orphan_ring == NULL) {
+		rc = -rte_errno;
+		goto cannot_create_shared_orphan_ring;
+	}
+
+	rc = snprintf(rg_name, sizeof(rg_name),
+		       RTE_MEMPOOL_MZ_FORMAT ".1", mp->name);
+	if (rc < 0 || rc >= (int)sizeof(rg_name)) {
+		rc = -ENAMETOOLONG;
+		goto invalid_shared_bucket_ring;
+	}
+	bd->shared_bucket_ring =
+		rte_ring_create(rg_name,
+				rte_align32pow2((mp->size + 1) /
+						bd->obj_per_bucket),
+				mp->socket_id, rg_flags);
+	if (bd->shared_bucket_ring == NULL) {
+		rc = -rte_errno;
+		goto cannot_create_shared_bucket_ring;
+	}
+
+	mp->pool_data = bd;
+
+	return 0;
+
+cannot_create_shared_bucket_ring:
+invalid_shared_bucket_ring:
+	rte_ring_free(bd->shared_orphan_ring);
+cannot_create_shared_orphan_ring:
+invalid_shared_orphan_ring:
+no_mem_for_stacks:
+	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		rte_free(bd->buckets[i]);
+		rte_ring_free(bd->adoption_buffer_rings[i]);
+	}
+	rte_free(bd);
+no_mem_for_data:
+	rte_errno = -rc;
+	return rc;
+}
+
+static void
+bucket_free(struct rte_mempool *mp)
+{
+	unsigned int i;
+	struct bucket_data *bd = mp->pool_data;
+
+	if (bd == NULL)
+		return;
+
+	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		rte_free(bd->buckets[i]);
+		rte_ring_free(bd->adoption_buffer_rings[i]);
+	}
+
+	rte_ring_free(bd->shared_orphan_ring);
+	rte_ring_free(bd->shared_bucket_ring);
+
+	rte_free(bd);
+}
+
+static ssize_t
+bucket_calc_mem_size(const struct rte_mempool *mp, uint32_t obj_num,
+		     __rte_unused uint32_t pg_shift, size_t *min_total_elt_size,
+		     size_t *align)
+{
+	struct bucket_data *bd = mp->pool_data;
+	unsigned int bucket_page_sz;
+
+	if (bd == NULL)
+		return -EINVAL;
+
+	bucket_page_sz = rte_align32pow2(bd->bucket_mem_size);
+	*align = bucket_page_sz;
+	*min_total_elt_size = bucket_page_sz;
+	/*
+	 * Each bucket occupies its own block aligned to
+	 * bucket_page_sz, so the required amount of memory is
+	 * a multiple of bucket_page_sz.
+	 * We also need extra space for a bucket header
+	 */
+	return ((obj_num + bd->obj_per_bucket - 1) /
+		bd->obj_per_bucket) * bucket_page_sz;
+}
+
+static int
+bucket_populate(struct rte_mempool *mp, unsigned int max_objs,
+		void *vaddr, rte_iova_t iova, size_t len,
+		rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
+{
+	struct bucket_data *bd = mp->pool_data;
+	unsigned int bucket_page_sz;
+	unsigned int bucket_header_sz;
+	unsigned int n_objs;
+	uintptr_t align;
+	uint8_t *iter;
+	int rc;
+
+	if (bd == NULL)
+		return -EINVAL;
+
+	bucket_page_sz = rte_align32pow2(bd->bucket_mem_size);
+	align = RTE_PTR_ALIGN_CEIL((uintptr_t)vaddr, bucket_page_sz) -
+		(uintptr_t)vaddr;
+
+	bucket_header_sz = bd->header_size - mp->header_size;
+	if (iova != RTE_BAD_IOVA)
+		iova += align + bucket_header_sz;
+
+	for (iter = (uint8_t *)vaddr + align, n_objs = 0;
+	     iter < (uint8_t *)vaddr + len && n_objs < max_objs;
+	     iter += bucket_page_sz) {
+		struct bucket_header *hdr = (struct bucket_header *)iter;
+		unsigned int chunk_len = bd->bucket_mem_size;
+
+		if ((size_t)(iter - (uint8_t *)vaddr) + chunk_len > len)
+			chunk_len = len - (iter - (uint8_t *)vaddr);
+		if (chunk_len <= bucket_header_sz)
+			break;
+		chunk_len -= bucket_header_sz;
+
+		hdr->fill_cnt = 0;
+		hdr->lcore_id = LCORE_ID_ANY;
+		rc = rte_mempool_op_populate_default(mp,
+						     RTE_MIN(bd->obj_per_bucket,
+							     max_objs - n_objs),
+						     iter + bucket_header_sz,
+						     iova, chunk_len,
+						     obj_cb, obj_cb_arg);
+		if (rc < 0)
+			return rc;
+		n_objs += rc;
+		if (iova != RTE_BAD_IOVA)
+			iova += bucket_page_sz;
+	}
+
+	return n_objs;
+}
+
+static const struct rte_mempool_ops ops_bucket = {
+	.name = "bucket",
+	.alloc = bucket_alloc,
+	.free = bucket_free,
+	.enqueue = bucket_enqueue,
+	.dequeue = bucket_dequeue,
+	.get_count = bucket_get_count,
+	.calc_mem_size = bucket_calc_mem_size,
+	.populate = bucket_populate,
+};
+
+
+MEMPOOL_REGISTER_OPS(ops_bucket);
diff --git a/drivers/mempool/bucket/rte_mempool_bucket_version.map b/drivers/mempool/bucket/rte_mempool_bucket_version.map
new file mode 100644
index 0000000..9b9ab1a
--- /dev/null
+++ b/drivers/mempool/bucket/rte_mempool_bucket_version.map
@@ -0,0 +1,4 @@
+DPDK_18.05 {
+
+	local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 8bab901..026e328 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -125,6 +125,7 @@ endif
 ifeq ($(CONFIG_RTE_BUILD_SHARED_LIB),n)
 # plugins (link only if static libraries)
 
+_LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_BUCKET) += -lrte_mempool_bucket
 _LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_STACK)  += -lrte_mempool_stack
 ifeq ($(CONFIG_RTE_LIBRTE_DPAA_BUS),y)
 _LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA_MEMPOOL)   += -lrte_mempool_dpaa
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread
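
One detail worth spelling out about the driver above: bucket_enqueue_single()
locates the bucket that owns an object purely by address arithmetic. Since
every bucket starts on a boundary aligned to the power-of-two rounded bucket
size, masking an object pointer with bucket_page_mask yields its bucket
header. A minimal sketch of that lookup as a hypothetical helper (not part of
the patch; it only restates what the code above does inline):

    /* Hypothetical helper mirroring the masking in bucket_enqueue_single(). */
    static inline struct bucket_header *
    obj_to_bucket_header(const struct bucket_data *bd, void *obj)
    {
            /* bucket_page_mask is ~(rte_align64pow2(bucket_mem_size) - 1),
             * so clearing the low address bits of any object pointer gives
             * the header of the bucket that contains it. */
            return (struct bucket_header *)((uintptr_t)obj & bd->bucket_page_mask);
    }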

* [PATCH v2 2/6] mempool: implement abstract mempool info API
  2018-04-16 13:33 ` [PATCH v2 0/6] mempool: " Andrew Rybchenko
  2018-04-16 13:33   ` [PATCH v2 1/6] mempool/bucket: implement bucket mempool manager Andrew Rybchenko
@ 2018-04-16 13:33   ` Andrew Rybchenko
  2018-04-25  8:44     ` Olivier Matz
  2018-04-16 13:33   ` [PATCH v2 3/6] mempool: support block dequeue operation Andrew Rybchenko
                     ` (4 subsequent siblings)
  6 siblings, 1 reply; 197+ messages in thread
From: Andrew Rybchenko @ 2018-04-16 13:33 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ, Artem V. Andreev

From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>

Primarily, it is intended as a way for the mempool driver to provide
additional information on how it lays out objects inside the mempool.

Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 lib/librte_mempool/rte_mempool.h           | 41 ++++++++++++++++++++++++++++++
 lib/librte_mempool/rte_mempool_ops.c       | 15 +++++++++++
 lib/librte_mempool/rte_mempool_version.map |  7 +++++
 3 files changed, 63 insertions(+)

diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 3e06ae0..1ac2f57 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -190,6 +190,14 @@ struct rte_mempool_memhdr {
 };
 
 /**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Additional information about the mempool
+ */
+struct rte_mempool_info;
+
+/**
  * The RTE mempool structure.
  */
 struct rte_mempool {
@@ -499,6 +507,16 @@ int rte_mempool_op_populate_default(struct rte_mempool *mp,
 		void *vaddr, rte_iova_t iova, size_t len,
 		rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Get some additional information about a mempool.
+ */
+typedef int (*rte_mempool_get_info_t)(const struct rte_mempool *mp,
+		struct rte_mempool_info *info);
+
+
 /** Structure defining mempool operations structure */
 struct rte_mempool_ops {
 	char name[RTE_MEMPOOL_OPS_NAMESIZE]; /**< Name of mempool ops struct. */
@@ -517,6 +535,10 @@ struct rte_mempool_ops {
 	 * provided memory chunk.
 	 */
 	rte_mempool_populate_t populate;
+	/**
+	 * Get mempool info
+	 */
+	rte_mempool_get_info_t get_info;
 } __rte_cache_aligned;
 
 #define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */
@@ -680,6 +702,25 @@ int rte_mempool_ops_populate(struct rte_mempool *mp, unsigned int max_objs,
 			     void *obj_cb_arg);
 
 /**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Wrapper for mempool_ops get_info callback.
+ *
+ * @param[in] mp
+ *   Pointer to the memory pool.
+ * @param[out] info
+ *   Pointer to the rte_mempool_info structure
+ * @return
+ *   - 0: Success; The mempool driver supports retrieving supplementary
+ *        mempool information
+ *   - -ENOTSUP: the driver does not support get_info ops (valid case).
+ */
+__rte_experimental
+int rte_mempool_ops_get_info(const struct rte_mempool *mp,
+			 struct rte_mempool_info *info);
+
+/**
  * @internal wrapper for mempool_ops free callback.
  *
  * @param mp
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
index ea9be1e..efc1c08 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -59,6 +59,7 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h)
 	ops->get_count = h->get_count;
 	ops->calc_mem_size = h->calc_mem_size;
 	ops->populate = h->populate;
+	ops->get_info = h->get_info;
 
 	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
 
@@ -134,6 +135,20 @@ rte_mempool_ops_populate(struct rte_mempool *mp, unsigned int max_objs,
 			     obj_cb_arg);
 }
 
+/* wrapper to get additional mempool info */
+int
+rte_mempool_ops_get_info(const struct rte_mempool *mp,
+			 struct rte_mempool_info *info)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_get_ops(mp->ops_index);
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->get_info, -ENOTSUP);
+	return ops->get_info(mp, info);
+}
+
+
 /* sets mempool ops previously registered by rte_mempool_register_ops. */
 int
 rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index cf375db..c9d16ec 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -57,3 +57,10 @@ DPDK_18.05 {
 	rte_mempool_op_populate_default;
 
 } DPDK_17.11;
+
+EXPERIMENTAL {
+	global:
+
+	rte_mempool_ops_get_info;
+
+} DPDK_18.05;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [PATCH v2 3/6] mempool: support block dequeue operation
  2018-04-16 13:33 ` [PATCH v2 0/6] mempool: " Andrew Rybchenko
  2018-04-16 13:33   ` [PATCH v2 1/6] mempool/bucket: implement bucket mempool manager Andrew Rybchenko
  2018-04-16 13:33   ` [PATCH v2 2/6] mempool: implement abstract mempool info API Andrew Rybchenko
@ 2018-04-16 13:33   ` Andrew Rybchenko
  2018-04-25  8:45     ` Olivier Matz
  2018-04-16 13:33   ` [PATCH v2 4/6] mempool/bucket: implement " Andrew Rybchenko
                     ` (3 subsequent siblings)
  6 siblings, 1 reply; 197+ messages in thread
From: Andrew Rybchenko @ 2018-04-16 13:33 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ, Artem V. Andreev

From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>

If a mempool manager supports object blocks (physically and virtually
contiguous sets of objects), it is sufficient to get only the first
object of each block, and the function allows the caller to avoid
filling in information about every block member.

Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 doc/guides/rel_notes/deprecation.rst       |   7 --
 lib/librte_mempool/Makefile                |   1 +
 lib/librte_mempool/meson.build             |   2 +
 lib/librte_mempool/rte_mempool.c           |  39 ++++++++
 lib/librte_mempool/rte_mempool.h           | 151 ++++++++++++++++++++++++++++-
 lib/librte_mempool/rte_mempool_ops.c       |   1 +
 lib/librte_mempool/rte_mempool_version.map |   1 +
 7 files changed, 194 insertions(+), 8 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 6d9a0c8..f3284c5 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -47,13 +47,6 @@ Deprecation Notices
 
   - ``rte_eal_mbuf_default_mempool_ops``
 
-* mempool: several API and ABI changes are planned in v18.05.
-
-  The following changes are planned:
-
-  - addition of new op to allocate contiguous
-    block of objects if underlying driver supports it.
-
 * mbuf: The opaque ``mbuf->hash.sched`` field will be updated to support generic
   definition in line with the ethdev TM and MTR APIs. Currently, this field
   is defined in librte_sched in a non-generic way. The new generic format
diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
index 7f19f00..e3c32b1 100644
--- a/lib/librte_mempool/Makefile
+++ b/lib/librte_mempool/Makefile
@@ -10,6 +10,7 @@ CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
 # Allow deprecated symbol to use deprecated rte_mempool_populate_iova_tab()
 # from earlier deprecated rte_mempool_populate_phys_tab()
 CFLAGS += -Wno-deprecated-declarations
+CFLAGS += -DALLOW_EXPERIMENTAL_API
 LDLIBS += -lrte_eal -lrte_ring
 
 EXPORT_MAP := rte_mempool_version.map
diff --git a/lib/librte_mempool/meson.build b/lib/librte_mempool/meson.build
index baf2d24..d507e55 100644
--- a/lib/librte_mempool/meson.build
+++ b/lib/librte_mempool/meson.build
@@ -1,6 +1,8 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2017 Intel Corporation
 
+allow_experimental_apis = true
+
 extra_flags = []
 
 # Allow deprecated symbol to use deprecated rte_mempool_populate_iova_tab()
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 84b3d64..cf5d124 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -1255,6 +1255,36 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp,
 #endif
 }
 
+void
+rte_mempool_contig_blocks_check_cookies(const struct rte_mempool *mp,
+	void * const *first_obj_table_const, unsigned int n, int free)
+{
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+	struct rte_mempool_info info;
+	const size_t total_elt_sz =
+		mp->header_size + mp->elt_size + mp->trailer_size;
+	unsigned int i, j;
+
+	rte_mempool_ops_get_info(mp, &info);
+
+	for (i = 0; i < n; ++i) {
+		void *first_obj = first_obj_table_const[i];
+
+		for (j = 0; j < info.contig_block_size; ++j) {
+			void *obj;
+
+			obj = (void *)((uintptr_t)first_obj + j * total_elt_sz);
+			rte_mempool_check_cookies(mp, &obj, 1, free);
+		}
+	}
+#else
+	RTE_SET_USED(mp);
+	RTE_SET_USED(first_obj_table_const);
+	RTE_SET_USED(n);
+	RTE_SET_USED(free);
+#endif
+}
+
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 static void
 mempool_obj_audit(struct rte_mempool *mp, __rte_unused void *opaque,
@@ -1320,6 +1350,7 @@ void
 rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 {
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+	struct rte_mempool_info info;
 	struct rte_mempool_debug_stats sum;
 	unsigned lcore_id;
 #endif
@@ -1361,6 +1392,7 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 
 	/* sum and dump statistics */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+	rte_mempool_ops_get_info(mp, &info);
 	memset(&sum, 0, sizeof(sum));
 	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
 		sum.put_bulk += mp->stats[lcore_id].put_bulk;
@@ -1369,6 +1401,8 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 		sum.get_success_objs += mp->stats[lcore_id].get_success_objs;
 		sum.get_fail_bulk += mp->stats[lcore_id].get_fail_bulk;
 		sum.get_fail_objs += mp->stats[lcore_id].get_fail_objs;
+		sum.get_success_blks += mp->stats[lcore_id].get_success_blks;
+		sum.get_fail_blks += mp->stats[lcore_id].get_fail_blks;
 	}
 	fprintf(f, "  stats:\n");
 	fprintf(f, "    put_bulk=%"PRIu64"\n", sum.put_bulk);
@@ -1377,6 +1411,11 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 	fprintf(f, "    get_success_objs=%"PRIu64"\n", sum.get_success_objs);
 	fprintf(f, "    get_fail_bulk=%"PRIu64"\n", sum.get_fail_bulk);
 	fprintf(f, "    get_fail_objs=%"PRIu64"\n", sum.get_fail_objs);
+	if (info.contig_block_size > 0) {
+		fprintf(f, "    get_success_blks=%"PRIu64"\n",
+			sum.get_success_blks);
+		fprintf(f, "    get_fail_blks=%"PRIu64"\n", sum.get_fail_blks);
+	}
 #else
 	fprintf(f, "  no statistics available\n");
 #endif
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 1ac2f57..3cab3a0 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -70,6 +70,10 @@ struct rte_mempool_debug_stats {
 	uint64_t get_success_objs; /**< Objects successfully allocated. */
 	uint64_t get_fail_bulk;    /**< Failed allocation number. */
 	uint64_t get_fail_objs;    /**< Objects that failed to be allocated. */
+	/** Successful allocation number of contiguous blocks. */
+	uint64_t get_success_blks;
+	/** Failed allocation number of contiguous blocks. */
+	uint64_t get_fail_blks;
 } __rte_cache_aligned;
 #endif
 
@@ -195,7 +199,10 @@ struct rte_mempool_memhdr {
  *
  * Additional information about the mempool
  */
-struct rte_mempool_info;
+struct rte_mempool_info {
+	/** Number of objects in the contiguous block */
+	unsigned int contig_block_size;
+};
 
 /**
  * The RTE mempool structure.
@@ -273,8 +280,16 @@ struct rte_mempool {
 			mp->stats[__lcore_id].name##_bulk += 1;	\
 		}                                               \
 	} while(0)
+#define __MEMPOOL_CONTIG_BLOCKS_STAT_ADD(mp, name, n) do {                    \
+		unsigned int __lcore_id = rte_lcore_id();       \
+		if (__lcore_id < RTE_MAX_LCORE) {               \
+			mp->stats[__lcore_id].name##_blks += n;	\
+			mp->stats[__lcore_id].name##_bulk += 1;	\
+		}                                               \
+	} while (0)
 #else
 #define __MEMPOOL_STAT_ADD(mp, name, n) do {} while(0)
+#define __MEMPOOL_CONTIG_BLOCKS_STAT_ADD(mp, name, n) do {} while (0)
 #endif
 
 /**
@@ -342,6 +357,38 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp,
 #define __mempool_check_cookies(mp, obj_table_const, n, free) do {} while(0)
 #endif /* RTE_LIBRTE_MEMPOOL_DEBUG */
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * @internal Check contiguous object blocks and update cookies or panic.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param first_obj_table_const
+ *   Pointer to a table of void * pointers (first object of the contiguous
+ *   object blocks).
+ * @param n
+ *   Number of contiguous object blocks.
+ * @param free
+ *   - 0: object is supposed to be allocated, mark it as free
+ *   - 1: object is supposed to be free, mark it as allocated
+ *   - 2: just check that cookie is valid (free or allocated)
+ */
+void rte_mempool_contig_blocks_check_cookies(const struct rte_mempool *mp,
+	void * const *first_obj_table_const, unsigned int n, int free);
+
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+#define __mempool_contig_blocks_check_cookies(mp, first_obj_table_const, n, \
+					      free) \
+	rte_mempool_contig_blocks_check_cookies(mp, first_obj_table_const, n, \
+						free)
+#else
+#define __mempool_contig_blocks_check_cookies(mp, first_obj_table_const, n, \
+					      free) \
+	do {} while (0)
+#endif /* RTE_LIBRTE_MEMPOOL_DEBUG */
+
 #define RTE_MEMPOOL_OPS_NAMESIZE 32 /**< Max length of ops struct name. */
 
 /**
@@ -374,6 +421,15 @@ typedef int (*rte_mempool_dequeue_t)(struct rte_mempool *mp,
 		void **obj_table, unsigned int n);
 
 /**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Dequeue a number of contiguous object blocks from the external pool.
+ */
+typedef int (*rte_mempool_dequeue_contig_blocks_t)(struct rte_mempool *mp,
+		 void **first_obj_table, unsigned int n);
+
+/**
  * Return the number of available objects in the external pool.
  */
 typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
@@ -539,6 +595,10 @@ struct rte_mempool_ops {
 	 * Get mempool info
 	 */
 	rte_mempool_get_info_t get_info;
+	/**
+	 * Dequeue a number of contiguous object blocks.
+	 */
+	rte_mempool_dequeue_contig_blocks_t dequeue_contig_blocks;
 } __rte_cache_aligned;
 
 #define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */
@@ -617,6 +677,30 @@ rte_mempool_ops_dequeue_bulk(struct rte_mempool *mp,
 }
 
 /**
+ * @internal Wrapper for mempool_ops dequeue_contig_blocks callback.
+ *
+ * @param[in] mp
+ *   Pointer to the memory pool.
+ * @param[out] first_obj_table
+ *   Pointer to a table of void * pointers (first objects).
+ * @param[in] n
+ *   Number of blocks to get.
+ * @return
+ *   - 0: Success; got n objects.
+ *   - <0: Error; code of dequeue function.
+ */
+static inline int
+rte_mempool_ops_dequeue_contig_blocks(struct rte_mempool *mp,
+		void **first_obj_table, unsigned int n)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_get_ops(mp->ops_index);
+	RTE_ASSERT(ops->dequeue_contig_blocks != NULL);
+	return ops->dequeue_contig_blocks(mp, first_obj_table, n);
+}
+
+/**
  * @internal wrapper for mempool_ops enqueue callback.
  *
  * @param mp
@@ -1531,6 +1615,71 @@ rte_mempool_get(struct rte_mempool *mp, void **obj_p)
 }
 
 /**
+ * @internal Get contiguous blocks of objects from the pool. Used internally.
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @param first_obj_table
+ *   A pointer to a pointer to the first object in each block.
+ * @param n
+ *   A number of blocks to get.
+ * @return
+ *   - 0: Success
+ *   - <0: Error
+ */
+static __rte_always_inline int
+__mempool_generic_get_contig_blocks(struct rte_mempool *mp,
+				    void **first_obj_table, unsigned int n)
+{
+	int ret;
+
+	ret = rte_mempool_ops_dequeue_contig_blocks(mp, first_obj_table, n);
+	if (ret < 0)
+		__MEMPOOL_CONTIG_BLOCKS_STAT_ADD(mp, get_fail, n);
+	else
+		__MEMPOOL_CONTIG_BLOCKS_STAT_ADD(mp, get_success, n);
+
+	return ret;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Get contiguous blocks of objects from the mempool.
+ *
+ * If the cache is enabled, consider flushing it first so that objects
+ * are reused as soon as possible.
+ *
+ * The application should check that the driver supports the operation
+ * by calling rte_mempool_ops_get_info() and checking that `contig_block_size`
+ * is not zero.
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @param first_obj_table
+ *   A pointer to a pointer to the first object in each block.
+ * @param n
+ *   The number of blocks to get from mempool.
+ * @return
+ *   - 0: Success; blocks taken.
+ *   - -ENOBUFS: Not enough entries in the mempool; no object is retrieved.
+ *   - -EOPNOTSUPP: The mempool driver does not support block dequeue
+ */
+static __rte_always_inline int
+__rte_experimental
+rte_mempool_get_contig_blocks(struct rte_mempool *mp,
+			      void **first_obj_table, unsigned int n)
+{
+	int ret;
+
+	ret = __mempool_generic_get_contig_blocks(mp, first_obj_table, n);
+	if (ret == 0)
+		__mempool_contig_blocks_check_cookies(mp, first_obj_table, n,
+						      1);
+	return ret;
+}
+
+/**
  * Return the number of entries in the mempool.
  *
  * When cache is enabled, this function has to browse the length of
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
index efc1c08..a27e1fa 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -60,6 +60,7 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h)
 	ops->calc_mem_size = h->calc_mem_size;
 	ops->populate = h->populate;
 	ops->get_info = h->get_info;
+	ops->dequeue_contig_blocks = h->dequeue_contig_blocks;
 
 	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
 
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index c9d16ec..1c406b5 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -53,6 +53,7 @@ DPDK_17.11 {
 DPDK_18.05 {
 	global:
 
+	rte_mempool_contig_blocks_check_cookies;
 	rte_mempool_op_calc_mem_size_default;
 	rte_mempool_op_populate_default;
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread
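
Taken together with the get_info op from the previous patch, the intended
application-side sequence is roughly the following (hypothetical, untested
sketch; `mp` is assumed to be an existing mempool created with a driver that
supports blocks, such as bucket): query `contig_block_size` first, then use
the block dequeue and handle the error codes documented above.

    struct rte_mempool_info info;
    void *first_obj[8];

    if (rte_mempool_ops_get_info(mp, &info) != 0 ||
        info.contig_block_size == 0) {
            /* Driver does not provide contiguous blocks;
             * fall back to rte_mempool_get_bulk(). */
    } else if (rte_mempool_get_contig_blocks(mp, first_obj,
                                             RTE_DIM(first_obj)) == 0) {
            /* Each first_obj[i] points to the first of
             * info.contig_block_size contiguous objects. */
    } else {
            /* -ENOBUFS: not enough entries, nothing was dequeued. */
    }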

* [PATCH v2 4/6] mempool/bucket: implement block dequeue operation
  2018-04-16 13:33 ` [PATCH v2 0/6] mempool: " Andrew Rybchenko
                     ` (2 preceding siblings ...)
  2018-04-16 13:33   ` [PATCH v2 3/6] mempool: support block dequeue operation Andrew Rybchenko
@ 2018-04-16 13:33   ` Andrew Rybchenko
  2018-04-16 13:33   ` [PATCH v2 5/6] mempool/bucket: do not allow one lcore to grab all buckets Andrew Rybchenko
                     ` (2 subsequent siblings)
  6 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-04-16 13:33 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ, Artem V. Andreev

From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>

Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/mempool/bucket/rte_mempool_bucket.c | 52 +++++++++++++++++++++++++++++
 1 file changed, 52 insertions(+)

diff --git a/drivers/mempool/bucket/rte_mempool_bucket.c b/drivers/mempool/bucket/rte_mempool_bucket.c
index 5a1bd79..0365671 100644
--- a/drivers/mempool/bucket/rte_mempool_bucket.c
+++ b/drivers/mempool/bucket/rte_mempool_bucket.c
@@ -294,6 +294,46 @@ bucket_dequeue(struct rte_mempool *mp, void **obj_table, unsigned int n)
 	return rc;
 }
 
+static int
+bucket_dequeue_contig_blocks(struct rte_mempool *mp, void **first_obj_table,
+			     unsigned int n)
+{
+	struct bucket_data *bd = mp->pool_data;
+	const uint32_t header_size = bd->header_size;
+	struct bucket_stack *cur_stack = bd->buckets[rte_lcore_id()];
+	unsigned int n_buckets_from_stack = RTE_MIN(n, cur_stack->top);
+	struct bucket_header *hdr;
+	void **first_objp = first_obj_table;
+
+	bucket_adopt_orphans(bd);
+
+	n -= n_buckets_from_stack;
+	while (n_buckets_from_stack-- > 0) {
+		hdr = bucket_stack_pop_unsafe(cur_stack);
+		*first_objp++ = (uint8_t *)hdr + header_size;
+	}
+	if (n > 0) {
+		if (unlikely(rte_ring_dequeue_bulk(bd->shared_bucket_ring,
+						   first_objp, n, NULL) != n)) {
+			/* Return the already dequeued buckets */
+			while (first_objp-- != first_obj_table) {
+				bucket_stack_push(cur_stack,
+						  (uint8_t *)*first_objp -
+						  header_size);
+			}
+			rte_errno = ENOBUFS;
+			return -rte_errno;
+		}
+		while (n-- > 0) {
+			hdr = (struct bucket_header *)*first_objp;
+			hdr->lcore_id = rte_lcore_id();
+			*first_objp++ = (uint8_t *)hdr + header_size;
+		}
+	}
+
+	return 0;
+}
+
 static void
 count_underfilled_buckets(struct rte_mempool *mp,
 			  void *opaque,
@@ -547,6 +587,16 @@ bucket_populate(struct rte_mempool *mp, unsigned int max_objs,
 	return n_objs;
 }
 
+static int
+bucket_get_info(const struct rte_mempool *mp, struct rte_mempool_info *info)
+{
+	struct bucket_data *bd = mp->pool_data;
+
+	info->contig_block_size = bd->obj_per_bucket;
+	return 0;
+}
+
+
 static const struct rte_mempool_ops ops_bucket = {
 	.name = "bucket",
 	.alloc = bucket_alloc,
@@ -556,6 +606,8 @@ static const struct rte_mempool_ops ops_bucket = {
 	.get_count = bucket_get_count,
 	.calc_mem_size = bucket_calc_mem_size,
 	.populate = bucket_populate,
+	.get_info = bucket_get_info,
+	.dequeue_contig_blocks = bucket_dequeue_contig_blocks,
 };
 
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread
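
Since the driver returns only the first object of each block, a consumer that
needs the individual members can walk them with the element stride, the same
way rte_mempool_contig_blocks_check_cookies() does in the previous patch.
A hedged sketch (`mp`, `first_obj` and `block_size` are assumed to come from
the calls shown earlier in the series):

    const size_t total_elt_sz =
            mp->header_size + mp->elt_size + mp->trailer_size;
    unsigned int j;

    for (j = 0; j < block_size; j++) {
            /* j-th member of the physically contiguous block */
            void *obj = (void *)((uintptr_t)first_obj + j * total_elt_sz);
            /* ... use obj ... */
    }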

* [PATCH v2 5/6] mempool/bucket: do not allow one lcore to grab all buckets
  2018-04-16 13:33 ` [PATCH v2 0/6] mempool: " Andrew Rybchenko
                     ` (3 preceding siblings ...)
  2018-04-16 13:33   ` [PATCH v2 4/6] mempool/bucket: implement " Andrew Rybchenko
@ 2018-04-16 13:33   ` Andrew Rybchenko
  2018-04-16 13:33   ` [PATCH v2 6/6] doc: advertise bucket mempool driver Andrew Rybchenko
  2018-04-24 23:00   ` [PATCH v2 0/6] mempool: add bucket driver Thomas Monjalon
  6 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-04-16 13:33 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ, Artem V. Andreev

From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>

Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/mempool/bucket/rte_mempool_bucket.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/drivers/mempool/bucket/rte_mempool_bucket.c b/drivers/mempool/bucket/rte_mempool_bucket.c
index 0365671..6c2da1c 100644
--- a/drivers/mempool/bucket/rte_mempool_bucket.c
+++ b/drivers/mempool/bucket/rte_mempool_bucket.c
@@ -42,6 +42,7 @@ struct bucket_data {
 	unsigned int header_size;
 	unsigned int total_elt_size;
 	unsigned int obj_per_bucket;
+	unsigned int bucket_stack_thresh;
 	uintptr_t bucket_page_mask;
 	struct rte_ring *shared_bucket_ring;
 	struct bucket_stack *buckets[RTE_MAX_LCORE];
@@ -139,6 +140,7 @@ bucket_enqueue(struct rte_mempool *mp, void * const *obj_table,
 	       unsigned int n)
 {
 	struct bucket_data *bd = mp->pool_data;
+	struct bucket_stack *local_stack = bd->buckets[rte_lcore_id()];
 	unsigned int i;
 	int rc = 0;
 
@@ -146,6 +148,15 @@ bucket_enqueue(struct rte_mempool *mp, void * const *obj_table,
 		rc = bucket_enqueue_single(bd, obj_table[i]);
 		RTE_ASSERT(rc == 0);
 	}
+	if (local_stack->top > bd->bucket_stack_thresh) {
+		rte_ring_enqueue_bulk(bd->shared_bucket_ring,
+				      &local_stack->objects
+				      [bd->bucket_stack_thresh],
+				      local_stack->top -
+				      bd->bucket_stack_thresh,
+				      NULL);
+	    local_stack->top = bd->bucket_stack_thresh;
+	}
 	return rc;
 }
 
@@ -408,6 +419,8 @@ bucket_alloc(struct rte_mempool *mp)
 	bd->obj_per_bucket = (bd->bucket_mem_size - bucket_header_size) /
 		bd->total_elt_size;
 	bd->bucket_page_mask = ~(rte_align64pow2(bd->bucket_mem_size) - 1);
+	/* eventually this should be a tunable parameter */
+	bd->bucket_stack_thresh = (mp->size / bd->obj_per_bucket) * 4 / 3;
 
 	if (mp->flags & MEMPOOL_F_SP_PUT)
 		rg_flags |= RING_F_SP_ENQ;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [PATCH v2 6/6] doc: advertise bucket mempool driver
  2018-04-16 13:33 ` [PATCH v2 0/6] mempool: " Andrew Rybchenko
                     ` (4 preceding siblings ...)
  2018-04-16 13:33   ` [PATCH v2 5/6] mempool/bucket: do not allow one lcore to grab all buckets Andrew Rybchenko
@ 2018-04-16 13:33   ` Andrew Rybchenko
  2018-04-24 23:00   ` [PATCH v2 0/6] mempool: add bucket driver Thomas Monjalon
  6 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-04-16 13:33 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 doc/guides/rel_notes/release_18_05.rst | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/doc/guides/rel_notes/release_18_05.rst b/doc/guides/rel_notes/release_18_05.rst
index de6ddc3..082eb05 100644
--- a/doc/guides/rel_notes/release_18_05.rst
+++ b/doc/guides/rel_notes/release_18_05.rst
@@ -115,6 +115,15 @@ New Features
 
   Linux uevent is supported as backend of this device event notification framework.
 
+* **Added bucket mempool driver.**
+
+  Added a bucket mempool driver which provides a way to allocate contiguous
+  blocks of objects.
+  The number of objects in a block depends on how many objects fit in an
+  RTE_DRIVER_MEMPOOL_BUCKET_SIZE_KB memory chunk, which is a build-time option.
+  The number may be obtained using the rte_mempool_ops_get_info() API.
+  Contiguous blocks may be allocated using the rte_mempool_get_contig_blocks() API.
+
 
 API Changes
 -----------
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 197+ messages in thread
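
To give a concrete feel for the block size advertised above: the driver
computes obj_per_bucket = (bucket_mem_size - bucket_header_size) /
total_elt_size. With purely illustrative numbers (say a 64 KiB bucket, a
64-byte cache-line bucket header and a 2176-byte total element size), that
works out to:

    bucket_mem_size = 64 * 1024             = 65536 bytes
    obj_per_bucket  = (65536 - 64) / 2176   = 30 objects per contiguous block

The real figure depends on the build-time RTE_DRIVER_MEMPOOL_BUCKET_SIZE_KB
setting and on the element size of the particular mempool.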

* Re: [PATCH v4 04/11] mempool: add op to calculate memory size to be allocated
  2018-04-16 13:24   ` [PATCH v4 04/11] mempool: add op to calculate memory size to be allocated Andrew Rybchenko
@ 2018-04-16 15:33     ` Olivier Matz
  2018-04-16 15:41       ` Andrew Rybchenko
  2018-04-17 10:23     ` Burakov, Anatoly
  1 sibling, 1 reply; 197+ messages in thread
From: Olivier Matz @ 2018-04-16 15:33 UTC (permalink / raw)
  To: Andrew Rybchenko; +Cc: dev, Anatoly Burakov

On Mon, Apr 16, 2018 at 02:24:33PM +0100, Andrew Rybchenko wrote:
> Size of memory chunk required to populate mempool objects depends
> on how objects are stored in the memory. Different mempool drivers
> may have different requirements and a new operation allows to
> calculate memory size in accordance with driver requirements and
> advertise requirements on minimum memory chunk size and alignment
> in a generic way.
> 
> Bump ABI version since the patch breaks it.
> 
> Suggested-by: Olivier Matz <olivier.matz@6wind.com>
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>

[...]

> @@ -643,39 +633,35 @@ rte_mempool_populate_default(struct rte_mempool *mp)
>  	 * 1G page on a 10MB memzone). If we fail to get enough contiguous
>  	 * memory, then we'll go and reserve space page-by-page.
>  	 */
> -	no_pageshift = no_contig || force_contig ||
> -			rte_eal_iova_mode() == RTE_IOVA_VA;
> +	no_pageshift = no_contig || rte_eal_iova_mode() == RTE_IOVA_VA;
>  	try_contig = !no_contig && !no_pageshift && rte_eal_has_hugepages();

In case there is a v5 for another reason, I think the last line is
equivalent to:

  try_contig = !no_pageshift && rte_eal_has_hugepages();


Otherwise:
Acked-by: Olivier Matz <olivier.matz@6wind.com>

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v4 04/11] mempool: add op to calculate memory size to be allocated
  2018-04-16 15:33     ` Olivier Matz
@ 2018-04-16 15:41       ` Andrew Rybchenko
  0 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-04-16 15:41 UTC (permalink / raw)
  To: Olivier Matz; +Cc: dev, Anatoly Burakov

On 04/16/2018 06:33 PM, Olivier Matz wrote:
> On Mon, Apr 16, 2018 at 02:24:33PM +0100, Andrew Rybchenko wrote:
>> Size of memory chunk required to populate mempool objects depends
>> on how objects are stored in the memory. Different mempool drivers
>> may have different requirements and a new operation allows to
>> calculate memory size in accordance with driver requirements and
>> advertise requirements on minimum memory chunk size and alignment
>> in a generic way.
>>
>> Bump ABI version since the patch breaks it.
>>
>> Suggested-by: Olivier Matz <olivier.matz@6wind.com>
>> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
> [...]
>
>> @@ -643,39 +633,35 @@ rte_mempool_populate_default(struct rte_mempool *mp)
>>   	 * 1G page on a 10MB memzone). If we fail to get enough contiguous
>>   	 * memory, then we'll go and reserve space page-by-page.
>>   	 */
>> -	no_pageshift = no_contig || force_contig ||
>> -			rte_eal_iova_mode() == RTE_IOVA_VA;
>> +	no_pageshift = no_contig || rte_eal_iova_mode() == RTE_IOVA_VA;
>>   	try_contig = !no_contig && !no_pageshift && rte_eal_has_hugepages();
> In case there is a v5 for another reason, I think the last line is
> equivalent to:
>
>    try_contig = !no_pageshift && rte_eal_has_hugepages();

Agreed. As I understand it, this was true before my patch as well.

> Otherwise:
> Acked-by: Olivier Matz <olivier.matz@6wind.com>

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v4 04/11] mempool: add op to calculate memory size to be allocated
  2018-04-16 13:24   ` [PATCH v4 04/11] mempool: add op to calculate memory size to be allocated Andrew Rybchenko
  2018-04-16 15:33     ` Olivier Matz
@ 2018-04-17 10:23     ` Burakov, Anatoly
  1 sibling, 0 replies; 197+ messages in thread
From: Burakov, Anatoly @ 2018-04-17 10:23 UTC (permalink / raw)
  To: Andrew Rybchenko, dev; +Cc: Olivier MATZ

On 16-Apr-18 2:24 PM, Andrew Rybchenko wrote:
> Size of memory chunk required to populate mempool objects depends
> on how objects are stored in the memory. Different mempool drivers
> may have different requirements and a new operation allows to
> calculate memory size in accordance with driver requirements and
> advertise requirements on minimum memory chunk size and alignment
> in a generic way.
> 
> Bump ABI version since the patch breaks it.
> 
> Suggested-by: Olivier Matz <olivier.matz@6wind.com>
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
> ---
> v3 -> v4:
>   - rebased on top of memory rework
>   - dropped previous Ack's since rebase is not trivial
>   - check size calculation failure in rte_mempool_populate_anon() and
>     rte_mempool_memchunk_anon_free()
> 
> v2 -> v3:
>   - none
> 
> v1 -> v2:
>   - clarify min_chunk_size meaning
>   - rebase on top of patch series which fixes library version in meson
>     build
> 
> RFCv2 -> v1:
>   - move default calc_mem_size callback to rte_mempool_ops_default.c
>   - add ABI changes to release notes
>   - name default callback consistently: rte_mempool_op_<callback>_default()
>   - bump ABI version since it is the first patch which breaks ABI
>   - describe default callback behaviour in details
>   - avoid introduction of internal function to cope with deprecation
>     (keep it to deprecation patch)
>   - move cache-line or page boundary chunk alignment to default callback
>   - highlight that min_chunk_size and align parameters are output only
> 
>   doc/guides/rel_notes/deprecation.rst         |   3 +-
>   doc/guides/rel_notes/release_18_05.rst       |   8 +-
>   lib/librte_mempool/Makefile                  |   3 +-
>   lib/librte_mempool/meson.build               |   5 +-
>   lib/librte_mempool/rte_mempool.c             | 114 +++++++++++++++------------
>   lib/librte_mempool/rte_mempool.h             |  86 +++++++++++++++++++-
>   lib/librte_mempool/rte_mempool_ops.c         |  18 +++++
>   lib/librte_mempool/rte_mempool_ops_default.c |  38 +++++++++
>   lib/librte_mempool/rte_mempool_version.map   |   7 ++
>   9 files changed, 225 insertions(+), 57 deletions(-)
>   create mode 100644 lib/librte_mempool/rte_mempool_ops_default.c

<...>

> -	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
>   	for (mz_id = 0, n = mp->size; n > 0; mz_id++, n -= ret) {
> +		size_t min_chunk_size;
>   		unsigned int flags;
> +
>   		if (try_contig || no_pageshift)
> -			size = rte_mempool_xmem_size(n, total_elt_sz, 0,
> -				mp->flags);
> +			mem_size = rte_mempool_ops_calc_mem_size(mp, n,
> +					0, &min_chunk_size, &align);
>   		else
> -			size = rte_mempool_xmem_size(n, total_elt_sz, pg_shift,
> -				mp->flags);
> +			mem_size = rte_mempool_ops_calc_mem_size(mp, n,
> +					pg_shift, &min_chunk_size, &align);
> +
> +		if (mem_size < 0) {
> +			ret = mem_size;
> +			goto fail;
> +		}
>   
>   		ret = snprintf(mz_name, sizeof(mz_name),
>   			RTE_MEMPOOL_MZ_FORMAT "_%d", mp->name, mz_id);
> @@ -692,27 +678,31 @@ rte_mempool_populate_default(struct rte_mempool *mp)
>   		if (try_contig)
>   			flags |= RTE_MEMZONE_IOVA_CONTIG;
>   
> -		mz = rte_memzone_reserve_aligned(mz_name, size, mp->socket_id,
> -				flags, align);
> +		mz = rte_memzone_reserve_aligned(mz_name, mem_size,
> +				mp->socket_id, flags, align);
>   
> -		/* if we were trying to allocate contiguous memory, adjust
> -		 * memzone size and page size to fit smaller page sizes, and
> -		 * try again.
> +		/* if we were trying to allocate contiguous memory, failed and
> +		 * minimum required contiguous chunk fits minimum page, adjust
> +		 * memzone size to the page size, and try again.
>   		 */
> -		if (mz == NULL && try_contig) {
> +		if (mz == NULL && try_contig && min_chunk_size <= pg_sz) {

This is a bit pessimistic. There may not have been enough 
IOVA-contiguous memory to reserve `mem_size`, but there may be enough 
contiguous memory to try and reserve `min_chunk_size` contiguous memory 
if it's bigger than the minimum page size. This is the *minimum* page size,
so there may be bigger pages, and ideally, if (min_chunk_size >= pg_sz) &&
(min_chunk_size < mem_size), we might have tried to allocate some
IOVA-contiguous memory and succeeded.
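
Something along these lines (a rough, untested sketch of the condition I
mean):

    if (mz == NULL && try_contig &&
        min_chunk_size >= pg_sz && min_chunk_size < mem_size) {
            /* Retry a smaller, still IOVA-contiguous reservation of
             * min_chunk_size before dropping RTE_MEMZONE_IOVA_CONTIG. */
    }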

However, that's not a huge issue, so...

Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>

>   			try_contig = false;
>   			flags &= ~RTE_MEMZONE_IOVA_CONTIG;
> -			align = pg_sz;
> -			size = rte_mempool_xmem_size(n, total_elt_sz,
> -				pg_shift, mp->flags);
>   
> -			mz = rte_memzone_reserve_aligned(mz_name, size,
> +			mem_size = rte_mempool_ops_calc_mem_size(mp, n,
> +					pg_shift, &min_chunk_size, &align);
> +			if (mem_size < 0) {
> +				ret = mem_size;
> +				goto fail;
> +			}
> +
> +			mz = rte_memzone_reserve_aligned(mz_name, mem_size,
>   				mp->socket_id, flags, align);
>   		}

<...>
-- 
Thanks,
Anatoly

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v1 0/6] mempool: add bucket driver
  2018-03-26 16:12   ` [PATCH v1 0/6] mempool: add bucket driver Andrew Rybchenko
                       ` (5 preceding siblings ...)
  2018-03-26 16:12     ` [PATCH v1 6/6] doc: advertise bucket mempool driver Andrew Rybchenko
@ 2018-04-19 16:41     ` Olivier Matz
  6 siblings, 0 replies; 197+ messages in thread
From: Olivier Matz @ 2018-04-19 16:41 UTC (permalink / raw)
  To: Andrew Rybchenko; +Cc: dev

Hi Andrew,

Sorry for the late feedback, few comments below.

On Mon, Mar 26, 2018 at 05:12:53PM +0100, Andrew Rybchenko wrote:
> The initial patch series [1] (RFCv1 is [2]) is split into two to simplify
> processing.  It is the second part which relies on the first one [3].
> 
> It should be applied on top of [4] and [3].
> 
> The patch series adds bucket mempool driver which allows to allocate
> (both physically and virtually) contiguous blocks of objects and adds
> mempool API to do it. It is still capable to provide separate objects,
> but it is definitely more heavy-weight than ring/stack drivers.
> The driver will be used by the future Solarflare driver enhancements
> which allow to utilize physical contiguous blocks in the NIC firmware.
> 
> The target usecase is dequeue in blocks and enqueue separate objects
> back (which are collected in buckets to be dequeued). So, the memory
> pool with bucket driver is created by an application and provided to
> networking PMD receive queue. The choice of bucket driver is done using
> rte_eth_dev_pool_ops_supported(). A PMD that relies upon contiguous
> block allocation should report the bucket driver as the only supported
> and preferred one.
> 
> Introduction of the contiguous block dequeue operation is proven by
> performance measurements using autotest with minor enhancements:
>  - in the original test bulks are powers of two, which is unacceptable
>    for us, so they are changed to multiple of contig_block_size;
>  - the test code is duplicated to support plain dequeue and
>    dequeue_contig_blocks;
>  - all the extra test variations (with/without cache etc) are eliminated;
>  - a fake read from the dequeued buffer is added (in both cases) to
>    simulate mbufs access.
> 
> start performance test for bucket (without cache)
> mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=   1 n_keep=  30 Srate_persec=   111935488
> mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=   1 n_keep=  60 Srate_persec=   115290931
> mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=  15 n_keep=  30 Srate_persec=   353055539
> mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=  15 n_keep=  60 Srate_persec=   353330790
> mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=   1 n_keep=  30 Srate_persec=   224657407
> mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=   1 n_keep=  60 Srate_persec=   230411468
> mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=  15 n_keep=  30 Srate_persec=   706700902
> mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=  15 n_keep=  60 Srate_persec=   703673139
> mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=   1 n_keep=  30 Srate_persec=   425236887
> mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=   1 n_keep=  60 Srate_persec=   437295512
> mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=  15 n_keep=  30 Srate_persec=  1343409356
> mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=  15 n_keep=  60 Srate_persec=  1336567397
> start performance test for bucket (without cache + contiguous dequeue)
> mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=   1 n_keep=  30 Crate_persec=   122945536
> mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=   1 n_keep=  60 Crate_persec=   126458265
> mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=  15 n_keep=  30 Crate_persec=   374262988
> mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=  15 n_keep=  60 Crate_persec=   377316966
> mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=   1 n_keep=  30 Crate_persec=   244842496
> mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=   1 n_keep=  60 Crate_persec=   251618917
> mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=  15 n_keep=  30 Crate_persec=   751226060
> mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=  15 n_keep=  60 Crate_persec=   756233010
> mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=   1 n_keep=  30 Crate_persec=   462068120
> mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=   1 n_keep=  60 Crate_persec=   476997221
> mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=  15 n_keep=  30 Crate_persec=  1432171313
> mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=  15 n_keep=  60 Crate_persec=  1438829771
> 
> The number of objects in the contiguous block is a function of bucket
> memory size (.config option) and total element size. In the future
> additional API with possibility to pass parameters on mempool allocation
> may be added.
> 
> It breaks ABI since changes rte_mempool_ops. The ABI version is already
> bumped in [4].
> 
> 
> [1] https://dpdk.org/ml/archives/dev/2018-January/088698.html
> [2] https://dpdk.org/ml/archives/dev/2017-November/082335.html
> [3] https://dpdk.org/ml/archives/dev/2018-March/093807.html
> [4] https://dpdk.org/ml/archives/dev/2018-March/093196.html


As discussed privately, at first glance I was a bit reluctant to
introduce a new API in mempool that would only be available in one
mempool driver.

There have been the same debate for several features of ethdev: should
we provide a generic API for a feature available in only one driver? Or
should the driver provide its own API?

Given that the mempool driver API is not that big currently, and that it
can bring a performance enhancement (which is the primary goal of DPDK),
I think we can give this patchset a chance. Drivers that want to use
this new bucket driver can take advantage of the new API, keeping a
fallback mode so that they still work with other mempool drivers.
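
Something like the following is what I have in mind for that fallback
(untested sketch, all names are placeholders):

    struct rte_mempool_info info;
    int has_blocks = (rte_mempool_ops_get_info(mp, &info) == 0 &&
                      info.contig_block_size > 0);

    if (has_blocks)
            ret = rte_mempool_get_contig_blocks(mp, first_obj, n_blocks);
    else
            ret = rte_mempool_get_bulk(mp, objs, n_objs);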

The bet is:

- drivers and applications try the bucket driver and its new API, and
  they notice a performance increase
- the get_contig_block API is implemented in some other drivers, if
  possible (not easy for default one at least)
- the bucket driver could become the default driver, if the performance
  increase is significant and widespread.

By integrating this patchset, I hope we can also have some feedback
about the performance of this driver in other situations. My worries are
about pipeline+multicore use-cases, where it may add some pressure
(races conditions?) on the bucket header.

Finally, I think (as discussed privately) that the tests should be
updated so that the results in this cover letter can be reproduced. I just
did a quick test by replacing "stack" by "bucket" in autotest (see patch
below) and it fails in populate(). I did not investigate more, maybe the
parameters are not correct for bucket.

 --- a/test/test/test_mempool.c
 +++ b/test/test/test_mempool.c
 @@ -498,7 +498,7 @@ test_mempool(void)
                 printf("cannot allocate mp_stack mempool\n");
                 goto err;
         }
 -       if (rte_mempool_set_ops_byname(mp_stack, "stack", NULL) < 0) {
 +       if (rte_mempool_set_ops_byname(mp_stack, "bucket", NULL) < 0) {
                 printf("cannot set stack handler\n");
                 goto err;
         }



Thank you for this work, hope we'll be on time for rc1.
Olivier

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v1 3/6] mempool: support block dequeue operation
  2018-03-26 16:12     ` [PATCH v1 3/6] mempool: support block dequeue operation Andrew Rybchenko
@ 2018-04-19 16:41       ` Olivier Matz
  2018-04-25  9:49         ` Andrew Rybchenko
  0 siblings, 1 reply; 197+ messages in thread
From: Olivier Matz @ 2018-04-19 16:41 UTC (permalink / raw)
  To: Andrew Rybchenko; +Cc: dev, Artem V. Andreev

On Mon, Mar 26, 2018 at 05:12:56PM +0100, Andrew Rybchenko wrote:
> From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>
> 
> If mempool manager supports object blocks (physically and virtual
> contiguous set of objects), it is sufficient to get the first
> object only and the function allows to avoid filling in of
> information about each block member.
> 
> Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>

Minor things here, please see below.

[...]

> @@ -1531,6 +1615,71 @@ rte_mempool_get(struct rte_mempool *mp, void **obj_p)
>  }
>  
>  /**
> + * @internal Get contiguous blocks of objects from the pool. Used internally.
> + * @param mp
> + *   A pointer to the mempool structure.
> + * @param first_obj_table
> + *   A pointer to a pointer to the first object in each block.
> + * @param n
> + *   A number of blocks to get.
> + * @return
> + *   - >0: Success
> + *   - <0: Error

I guess it is 0 here, not >0.


> + */
> +static __rte_always_inline int
> +__mempool_generic_get_contig_blocks(struct rte_mempool *mp,
> +				    void **first_obj_table, unsigned int n)
> +{
> +	int ret;
> +
> +	ret = rte_mempool_ops_dequeue_contig_blocks(mp, first_obj_table, n);
> +	if (ret < 0)
> +		__MEMPOOL_CONTIG_BLOCKS_STAT_ADD(mp, get_fail, n);
> +	else
> +		__MEMPOOL_CONTIG_BLOCKS_STAT_ADD(mp, get_success, n);
> +
> +	return ret;
> +}
> +

Is it worth having this function?
I think it would be simple to include the code in
rte_mempool_get_contig_blocks() below... or am I missing something?


> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Get a contiguous blocks of objects from the mempool.
> + *
> + * If cache is enabled, consider to flush it first, to reuse objects
> + * as soon as possible.
> + *
> + * The application should check that the driver supports the operation
> + * by calling rte_mempool_ops_get_info() and checking that `contig_block_size`
> + * is not zero.
> + *
> + * @param mp
> + *   A pointer to the mempool structure.
> + * @param first_obj_table
> + *   A pointer to a pointer to the first object in each block.
> + * @param n
> + *   The number of blocks to get from mempool.
> + * @return
> + *   - 0: Success; blocks taken.
> + *   - -ENOBUFS: Not enough entries in the mempool; no object is retrieved.
> + *   - -EOPNOTSUPP: The mempool driver does not support block dequeue
> + */
> +static __rte_always_inline int
> +__rte_experimental
> +rte_mempool_get_contig_blocks(struct rte_mempool *mp,
> +			      void **first_obj_table, unsigned int n)
> +{
> +	int ret;
> +
> +	ret = __mempool_generic_get_contig_blocks(mp, first_obj_table, n);
> +	if (ret == 0)
> +		__mempool_contig_blocks_check_cookies(mp, first_obj_table, n,
> +						      1);
> +	return ret;
> +}

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v1 2/6] mempool: implement abstract mempool info API
  2018-03-26 16:12     ` [PATCH v1 2/6] mempool: implement abstract mempool info API Andrew Rybchenko
@ 2018-04-19 16:42       ` Olivier Matz
  2018-04-25  9:57         ` Andrew Rybchenko
  0 siblings, 1 reply; 197+ messages in thread
From: Olivier Matz @ 2018-04-19 16:42 UTC (permalink / raw)
  To: Andrew Rybchenko; +Cc: dev, Artem V. Andreev

On Mon, Mar 26, 2018 at 05:12:55PM +0100, Andrew Rybchenko wrote:
> From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>
> 
> Primarily, it is intended as a way for the mempool driver to provide
> additional information on how it lays up objects inside the mempool.
> 
> Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>

I think it's a good idea to have a way to query mempool features
or parameters. The approach chosen in this patch looks similar
to what we have with ethdev devinfo, right?

[...]

>  /**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Additional information about the mempool
> + */
> +struct rte_mempool_info;
> +

[...]

> +/* wrapper to get additional mempool info */
> +int
> +rte_mempool_ops_get_info(const struct rte_mempool *mp,
> +			 struct rte_mempool_info *info)
> +{
> +	struct rte_mempool_ops *ops;
> +
> +	ops = rte_mempool_get_ops(mp->ops_index);
> +
> +	RTE_FUNC_PTR_OR_ERR_RET(ops->get_info, -ENOTSUP);
> +	return ops->get_info(mp, info);
> +}

Thinking in terms of ABI compatibility, it looks that each time we will
add or remove a field, it will break the ABI because the info structure
will change.

Well, it's maybe nitpicking, because most of the time adding a field in
info structure goes with adding a field in the mempool struct, which
will anyway break the ABI.

Another approach is to have a function
rte_mempool_info_contig_block_size(mp) whose ABI can be more easily
wrapped with VERSION_SYMBOL().

On my side I'm fine with your current approach, especially given how few
usages of VERSION_SYMBOL() we can find in DPDK. But in case you feel
it's better to have a function...
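
Just to illustrate what I mean (a rough sketch only, not something I'm asking
for): such an accessor could wrap the get_info op so that the info structure
layout stays out of the application-visible ABI and only the function symbol
would need versioning:

unsigned int
rte_mempool_info_contig_block_size(const struct rte_mempool *mp)
{
	struct rte_mempool_info info;

	if (rte_mempool_ops_get_info(mp, &info) != 0)
		return 0;
	return info.contig_block_size;
}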

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v1 6/6] doc: advertise bucket mempool driver
  2018-03-26 16:12     ` [PATCH v1 6/6] doc: advertise bucket mempool driver Andrew Rybchenko
@ 2018-04-19 16:43       ` Olivier Matz
  0 siblings, 0 replies; 197+ messages in thread
From: Olivier Matz @ 2018-04-19 16:43 UTC (permalink / raw)
  To: Andrew Rybchenko; +Cc: dev

On Mon, Mar 26, 2018 at 05:12:59PM +0100, Andrew Rybchenko wrote:
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
> ---
>  doc/guides/rel_notes/release_18_05.rst | 9 +++++++++
>  1 file changed, 9 insertions(+)
> 
> diff --git a/doc/guides/rel_notes/release_18_05.rst b/doc/guides/rel_notes/release_18_05.rst
> index 016c4ed..c578364 100644
> --- a/doc/guides/rel_notes/release_18_05.rst
> +++ b/doc/guides/rel_notes/release_18_05.rst
> @@ -52,6 +52,15 @@ New Features
>    * Added support for NVGRE, VXLAN and GENEVE filters in flow API.
>    * Added support for DROP action in flow API.
>  
> +* **Added bucket mempool driver.**
> +
> +  Added bucket mempool driver which provide a way to allocate contiguous

provides :)


> +  block of objects.
> +  Number of objects in the block depends on how many objects fit in
> +  RTE_DRIVER_MEMPOOL_BUCKET_SIZE_KB memory chunk which is build time option.
> +  The number may be obtained using rte_mempool_ops_get_info() API.
> +  Contiguous blocks may be allocated using rte_mempool_get_contig_blocks() API.
> +

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v4 00/11] mempool: prepare to add bucket driver
  2018-04-16 13:24 ` [PATCH v4 00/11] mempool: prepare to " Andrew Rybchenko
                     ` (10 preceding siblings ...)
  2018-04-16 13:24   ` [PATCH v4 11/11] mempool: support flushing the default cache of the mempool Andrew Rybchenko
@ 2018-04-24  0:20   ` Thomas Monjalon
  11 siblings, 0 replies; 197+ messages in thread
From: Thomas Monjalon @ 2018-04-24  0:20 UTC (permalink / raw)
  To: Andrew Rybchenko
  Cc: dev, Olivier MATZ, Anatoly Burakov, Santosh Shukla, Jerin Jacob,
	Hemant Agrawal, Shreyansh Jain

16/04/2018 15:24, Andrew Rybchenko:
> The initial patch series [1] is split into two to simplify processing.
> The second series relies on this one and will add bucket mempool driver
> and related ops.
[...]
> Andrew Rybchenko (9):
>   mempool: fix memhdr leak when no objects are populated
>   mempool: rename flag to control IOVA-contiguous objects
>   mempool: add op to calculate memory size to be allocated
>   mempool: add op to populate objects using provided memory
>   mempool: remove callback to get capabilities
>   mempool: deprecate xmem functions
>   mempool/octeontx: prepare to remove register memory area op
>   mempool/dpaa: prepare to remove register memory area op
>   mempool: remove callback to register memory area
> 
> Artem V. Andreev (2):
>   mempool: ensure the mempool is initialized before populating
>   mempool: support flushing the default cache of the mempool

Series applied (with formatting in release notes), thanks.

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v2 0/6] mempool: add bucket driver
  2018-04-16 13:33 ` [PATCH v2 0/6] mempool: " Andrew Rybchenko
                     ` (5 preceding siblings ...)
  2018-04-16 13:33   ` [PATCH v2 6/6] doc: advertise bucket mempool driver Andrew Rybchenko
@ 2018-04-24 23:00   ` Thomas Monjalon
  2018-04-25  8:43     ` Olivier Matz
  6 siblings, 1 reply; 197+ messages in thread
From: Thomas Monjalon @ 2018-04-24 23:00 UTC (permalink / raw)
  To: Olivier MATZ; +Cc: dev, Andrew Rybchenko

Can we have this patchset in 18.05-rc1?
Or is it a candidate for rc2?

16/04/2018 15:33, Andrew Rybchenko:
> The patch series adds bucket mempool driver which allows to allocate
> (both physically and virtually) contiguous blocks of objects and adds
> mempool API to do it. It is still capable to provide separate objects,
> but it is definitely more heavy-weight than ring/stack drivers.
> The driver will be used by the future Solarflare driver enhancements
> which allow to utilize physical contiguous blocks in the NIC firmware.
> 
> The target usecase is dequeue in blocks and enqueue separate objects
> back (which are collected in buckets to be dequeued). So, the memory
> pool with bucket driver is created by an application and provided to
> networking PMD receive queue. The choice of bucket driver is done using
> rte_eth_dev_pool_ops_supported(). A PMD that relies upon contiguous
> block allocation should report the bucket driver as the only supported
> and preferred one.
[...]
> Andrew Rybchenko (1):
>   doc: advertise bucket mempool driver
> 
> Artem V. Andreev (5):
>   mempool/bucket: implement bucket mempool manager
>   mempool: implement abstract mempool info API
>   mempool: support block dequeue operation
>   mempool/bucket: implement block dequeue operation
>   mempool/bucket: do not allow one lcore to grab all buckets

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v2 0/6] mempool: add bucket driver
  2018-04-24 23:00   ` [PATCH v2 0/6] mempool: add bucket driver Thomas Monjalon
@ 2018-04-25  8:43     ` Olivier Matz
  0 siblings, 0 replies; 197+ messages in thread
From: Olivier Matz @ 2018-04-25  8:43 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev, Andrew Rybchenko

Hi,

On Wed, Apr 25, 2018 at 01:00:23AM +0200, Thomas Monjalon wrote:
> Can we have this patchset in 18.05-rc1?
> Or is it candidate to rc2?

I realized I made my comments on v1 instead of v2, sorry.

https://dpdk.org/dev/patchwork/patch/36538/
https://dpdk.org/dev/patchwork/patch/36535/
https://dpdk.org/dev/patchwork/patch/36533/

All of them are minor issues; they could be addressed for rc2.
Will send acks for generic mempool part.

Olivier


> 
> 16/04/2018 15:33, Andrew Rybchenko:
> > The patch series adds bucket mempool driver which allows to allocate
> > (both physically and virtually) contiguous blocks of objects and adds
> > mempool API to do it. It is still capable to provide separate objects,
> > but it is definitely more heavy-weight than ring/stack drivers.
> > The driver will be used by the future Solarflare driver enhancements
> > which allow to utilize physical contiguous blocks in the NIC firmware.
> > 
> > The target usecase is dequeue in blocks and enqueue separate objects
> > back (which are collected in buckets to be dequeued). So, the memory
> > pool with bucket driver is created by an application and provided to
> > networking PMD receive queue. The choice of bucket driver is done using
> > rte_eth_dev_pool_ops_supported(). A PMD that relies upon contiguous
> > block allocation should report the bucket driver as the only supported
> > and preferred one.
> [...]
> > Andrew Rybchenko (1):
> >   doc: advertise bucket mempool driver
> > 
> > Artem V. Andreev (5):
> >   mempool/bucket: implement bucket mempool manager
> >   mempool: implement abstract mempool info API
> >   mempool: support block dequeue operation
> >   mempool/bucket: implement block dequeue operation
> >   mempool/bucket: do not allow one lcore to grab all buckets

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v2 2/6] mempool: implement abstract mempool info API
  2018-04-16 13:33   ` [PATCH v2 2/6] mempool: implement abstract mempool info API Andrew Rybchenko
@ 2018-04-25  8:44     ` Olivier Matz
  0 siblings, 0 replies; 197+ messages in thread
From: Olivier Matz @ 2018-04-25  8:44 UTC (permalink / raw)
  To: Andrew Rybchenko; +Cc: dev, Artem V. Andreev

On Mon, Apr 16, 2018 at 02:33:26PM +0100, Andrew Rybchenko wrote:
> From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>
> 
> Primarily, it is intended as a way for the mempool driver to provide
> additional information on how it lays up objects inside the mempool.
> 
> Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>

Acked-by: Olivier Matz <olivier.matz@6wind.com>

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v2 3/6] mempool: support block dequeue operation
  2018-04-16 13:33   ` [PATCH v2 3/6] mempool: support block dequeue operation Andrew Rybchenko
@ 2018-04-25  8:45     ` Olivier Matz
  0 siblings, 0 replies; 197+ messages in thread
From: Olivier Matz @ 2018-04-25  8:45 UTC (permalink / raw)
  To: Andrew Rybchenko; +Cc: dev, Artem V. Andreev

On Mon, Apr 16, 2018 at 02:33:27PM +0100, Andrew Rybchenko wrote:
> From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>
> 
> If mempool manager supports object blocks (physically and virtual
> contiguous set of objects), it is sufficient to get the first
> object only and the function allows to avoid filling in of
> information about each block member.
> 
> Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>

Acked-by: Olivier Matz <olivier.matz@6wind.com>

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v1 3/6] mempool: support block dequeue operation
  2018-04-19 16:41       ` Olivier Matz
@ 2018-04-25  9:49         ` Andrew Rybchenko
  0 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-04-25  9:49 UTC (permalink / raw)
  To: Olivier Matz; +Cc: dev, Artem V. Andreev

On 04/19/2018 07:41 PM, Olivier Matz wrote:
> On Mon, Mar 26, 2018 at 05:12:56PM +0100, Andrew Rybchenko wrote:
> [...]
>> @@ -1531,6 +1615,71 @@ rte_mempool_get(struct rte_mempool *mp, void **obj_p)
>>   }
>>   
>>   /**
>> + * @internal Get contiguous blocks of objects from the pool. Used internally.
>> + * @param mp
>> + *   A pointer to the mempool structure.
>> + * @param first_obj_table
>> + *   A pointer to a pointer to the first object in each block.
>> + * @param n
>> + *   A number of blocks to get.
>> + * @return
>> + *   - >0: Success
>> + *   - <0: Error
> I guess it is 0 here, not >0.

Yes, thanks.

>> + */
>> +static __rte_always_inline int
>> +__mempool_generic_get_contig_blocks(struct rte_mempool *mp,
>> +				    void **first_obj_table, unsigned int n)
>> +{
>> +	int ret;
>> +
>> +	ret = rte_mempool_ops_dequeue_contig_blocks(mp, first_obj_table, n);
>> +	if (ret < 0)
>> +		__MEMPOOL_CONTIG_BLOCKS_STAT_ADD(mp, get_fail, n);
>> +	else
>> +		__MEMPOOL_CONTIG_BLOCKS_STAT_ADD(mp, get_success, n);
>> +
>> +	return ret;
>> +}
>> +
> Is it worth having this function?

Just to follow the same code structure as the usual dequeue.

> I think it would be simple to include the code in
> rte_mempool_get_contig_blocks() below... or am I missing something?

I agree. Will do in v3.
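
Roughly like this (a sketch only, combining the two bodies quoted above; the
stats and debug checks are kept as in this patch):

static __rte_always_inline int
__rte_experimental
rte_mempool_get_contig_blocks(struct rte_mempool *mp,
			      void **first_obj_table, unsigned int n)
{
	int ret;

	ret = rte_mempool_ops_dequeue_contig_blocks(mp, first_obj_table, n);
	if (ret == 0) {
		__MEMPOOL_CONTIG_BLOCKS_STAT_ADD(mp, get_success, n);
		__mempool_contig_blocks_check_cookies(mp, first_obj_table, n,
						      1);
	} else {
		__MEMPOOL_CONTIG_BLOCKS_STAT_ADD(mp, get_fail, n);
	}

	return ret;
}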

[...]

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v1 2/6] mempool: implement abstract mempool info API
  2018-04-19 16:42       ` Olivier Matz
@ 2018-04-25  9:57         ` Andrew Rybchenko
  2018-04-25 10:26           ` Olivier Matz
  0 siblings, 1 reply; 197+ messages in thread
From: Andrew Rybchenko @ 2018-04-25  9:57 UTC (permalink / raw)
  To: Olivier Matz; +Cc: dev, Artem V. Andreev

On 04/19/2018 07:42 PM, Olivier Matz wrote:
> On Mon, Mar 26, 2018 at 05:12:55PM +0100, Andrew Rybchenko wrote:
>> From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>
>>
>> Primarily, it is intended as a way for the mempool driver to provide
>> additional information on how it lays up objects inside the mempool.
>>
>> Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
>> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
> I think it's a good idea to have a way to query mempool features
> or parameters. The approach chosen in this patch looks similar
> to what we have with ethdev devinfo, right?

Yes.

> [...]
>
>>   /**
>> + * @warning
>> + * @b EXPERIMENTAL: this API may change without prior notice.
>> + *
>> + * Additional information about the mempool
>> + */
>> +struct rte_mempool_info;
>> +
> [...]
>
>> +/* wrapper to get additional mempool info */
>> +int
>> +rte_mempool_ops_get_info(const struct rte_mempool *mp,
>> +			 struct rte_mempool_info *info)
>> +{
>> +	struct rte_mempool_ops *ops;
>> +
>> +	ops = rte_mempool_get_ops(mp->ops_index);
>> +
>> +	RTE_FUNC_PTR_OR_ERR_RET(ops->get_info, -ENOTSUP);
>> +	return ops->get_info(mp, info);
>> +}
> Thinking in terms of ABI compatibility, it looks that each time we will
> add or remove a field, it will break the ABI because the info structure
> will change.
>
> Well, it's maybe nitpicking, because most of the time adding a field in
> info structure goes with adding a field in the mempool struct, which
> will anyway break the ABI.
>
> Another approach is to have a function
> rte_mempool_info_contig_block_size(mp) whose ABI can be more easily
> wrapped with VERSION_SYMBOL().
>
> On my side I'm fine with your current approach, especially given how few
> usages of VERSION_SYMBOL() we can find in DPDK. But in case you feel
> it's better to have a function...

I'd prefer to keep the current solution. Otherwise it could result in too many
different functions to get various pieces of information about mempool driver
features/characteristics. It could also be inconvenient to retrieve many
parameters that way.

Maybe we should align the info structure size to a cache line to avoid size
changes in many cases? Typically it will be used on the slow path and
located on the caller's stack, so a few extra bytes should not
be a problem.
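
Something along these lines (just a sketch of what I mean; the dummy member is
only a placeholder until the first real field is added):

struct rte_mempool_info {
	unsigned int dummy; /* removed as soon as a real member appears */
} __rte_cache_aligned;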

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v1 2/6] mempool: implement abstract mempool info API
  2018-04-25  9:57         ` Andrew Rybchenko
@ 2018-04-25 10:26           ` Olivier Matz
  0 siblings, 0 replies; 197+ messages in thread
From: Olivier Matz @ 2018-04-25 10:26 UTC (permalink / raw)
  To: Andrew Rybchenko; +Cc: dev, Artem V. Andreev

Hi Andrew,

> > > + * @warning
> > > + * @b EXPERIMENTAL: this API may change without prior notice.
> > > + *
> > > + * Additional information about the mempool
> > > + */
> > > +struct rte_mempool_info;
> > > +
> > [...]
> > 
> > > +/* wrapper to get additional mempool info */
> > > +int
> > > +rte_mempool_ops_get_info(const struct rte_mempool *mp,
> > > +			 struct rte_mempool_info *info)
> > > +{
> > > +	struct rte_mempool_ops *ops;
> > > +
> > > +	ops = rte_mempool_get_ops(mp->ops_index);
> > > +
> > > +	RTE_FUNC_PTR_OR_ERR_RET(ops->get_info, -ENOTSUP);
> > > +	return ops->get_info(mp, info);
> > > +}
> > Thinking in terms of ABI compatibility, it looks that each time we will
> > add or remove a field, it will break the ABI because the info structure
> > will change.
> > 
> > Well, it's maybe nitpicking, because most of the time adding a field in
> > info structure goes with adding a field in the mempool struct, which
> > will anyway break the ABI.
> > 
> > Another approach is to have a function
> > rte_mempool_info_contig_block_size(mp) whose ABI can be more easily
> > wrapped with VERSION_SYMBOL().
> > 
> > On my side I'm fine with your current approach, especially given how few
> > usages of VERSION_SYMBOL() we can find in DPDK. But in case you feel
> > it's better to have a function...
> 
> I'd prefer to keep the current solution. Otherwise it could result in too many
> different functions to get various pieces of information about mempool driver
> features/characteristics. It could also be inconvenient to retrieve many
> parameters that way.
> 
> Maybe we should align the info structure size to a cache line to avoid size
> changes in many cases? Typically it will be used on the slow path and
> located on the caller's stack, so a few extra bytes should not
> be a problem.

Yes, that could be a good thing to do.

Thanks,
Olivier

^ permalink raw reply	[flat|nested] 197+ messages in thread

* [PATCH v3 0/6] mempool: add bucket driver
  2017-11-24 16:06 [RFC PATCH 0/6] mempool: add bucket mempool driver Andrew Rybchenko
                   ` (9 preceding siblings ...)
  2018-04-16 13:33 ` [PATCH v2 0/6] mempool: " Andrew Rybchenko
@ 2018-04-25 16:32 ` Andrew Rybchenko
  2018-04-25 16:32   ` [PATCH v3 1/6] mempool/bucket: implement bucket mempool manager Andrew Rybchenko
                     ` (5 more replies)
  2018-04-26 10:59 ` [PATCH v4 0/5] mempool: add bucket driver Andrew Rybchenko
  11 siblings, 6 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-04-25 16:32 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ

The initial patch series [1] (RFCv1 is [2]) is split into two to simplify
processing.  This is the second part, which relies on the first one [3]
that is already applied.

The patch series adds a bucket mempool driver which allows allocation of
(both physically and virtually) contiguous blocks of objects and adds a
mempool API to do it. It is still capable of providing separate objects,
but it is definitely more heavy-weight than the ring/stack drivers.
The driver will be used by future Solarflare driver enhancements which
make it possible to utilize physically contiguous blocks in the NIC firmware.

The target use case is to dequeue in blocks and enqueue separate objects
back (which are collected in buckets to be dequeued). So, the memory
pool with the bucket driver is created by an application and provided to
a networking PMD receive queue. The choice of the bucket driver is done using
rte_eth_dev_pool_ops_supported(). A PMD that relies upon contiguous
block allocation should report the bucket driver as the only supported
and preferred one.
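
For example, an application could select the driver along these lines
(an illustrative sketch only; rte_pktmbuf_pool_create_by_ops() is just one
way to pick the ops, and port_id/NB_MBUFS are placeholders):

	struct rte_mempool *mp;

	if (rte_eth_dev_pool_ops_supported(port_id, "bucket") < 0)
		rte_exit(EXIT_FAILURE, "bucket mempool not supported by port\n");

	mp = rte_pktmbuf_pool_create_by_ops("rx_pool", NB_MBUFS, 0, 0,
					    RTE_MBUF_DEFAULT_BUF_SIZE,
					    rte_socket_id(), "bucket");
	if (mp == NULL)
		rte_exit(EXIT_FAILURE, "cannot create bucket mempool\n");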

The benefit of introducing the contiguous block dequeue operation is proven by
performance measurements using the autotest with minor enhancements:
 - in the original test bulks are powers of two, which is unacceptable
   for us, so they are changed to multiples of contig_block_size;
 - the test code is duplicated to support plain dequeue and
   dequeue_contig_blocks;
 - all the extra test variations (with/without cache etc.) are eliminated;
 - a fake read from the dequeued buffer is added (in both cases) to
   simulate mbuf access.

start performance test for bucket (without cache)
mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=   1 n_keep=  30 Srate_persec=   111935488
mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=   1 n_keep=  60 Srate_persec=   115290931
mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=  15 n_keep=  30 Srate_persec=   353055539
mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=  15 n_keep=  60 Srate_persec=   353330790
mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=   1 n_keep=  30 Srate_persec=   224657407
mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=   1 n_keep=  60 Srate_persec=   230411468
mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=  15 n_keep=  30 Srate_persec=   706700902
mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=  15 n_keep=  60 Srate_persec=   703673139
mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=   1 n_keep=  30 Srate_persec=   425236887
mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=   1 n_keep=  60 Srate_persec=   437295512
mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=  15 n_keep=  30 Srate_persec=  1343409356
mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=  15 n_keep=  60 Srate_persec=  1336567397
start performance test for bucket (without cache + contiguous dequeue)
mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=   1 n_keep=  30 Crate_persec=   122945536
mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=   1 n_keep=  60 Crate_persec=   126458265
mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=  15 n_keep=  30 Crate_persec=   374262988
mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=  15 n_keep=  60 Crate_persec=   377316966
mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=   1 n_keep=  30 Crate_persec=   244842496
mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=   1 n_keep=  60 Crate_persec=   251618917
mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=  15 n_keep=  30 Crate_persec=   751226060
mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=  15 n_keep=  60 Crate_persec=   756233010
mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=   1 n_keep=  30 Crate_persec=   462068120
mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=   1 n_keep=  60 Crate_persec=   476997221
mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=  15 n_keep=  30 Crate_persec=  1432171313
mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=  15 n_keep=  60 Crate_persec=  1438829771

The number of objects in the contiguous block is a function of the bucket
memory size (a .config option) and the total element size. In the future an
additional API that makes it possible to pass parameters on mempool allocation
may be added.
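
For reference, an application is expected to query and use the block size
roughly as follows (an illustrative sketch; NB_BLOCKS is a placeholder):

	struct rte_mempool_info info;
	void *first_obj[NB_BLOCKS];

	if (rte_mempool_ops_get_info(mp, &info) != 0 ||
	    info.contig_block_size == 0)
		return; /* driver does not support contiguous blocks */

	if (rte_mempool_get_contig_blocks(mp, first_obj, NB_BLOCKS) == 0) {
		/* each first_obj[i] is the start of a run of
		 * info.contig_block_size physically contiguous objects
		 */
	}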

It breaks the ABI since it changes rte_mempool_ops. The ABI version is already
bumped in [4].

I've double-checked that mempool_autotest and mempool_perf_autotest
work fine if the EAL argument --mbuf-pool-ops-name=bucket is used.

mempool_perf_autotest as is shows a lower rate for bucket than for ring_mp_mc
since the test dequeue bulk sizes are not aligned to the contiguous block size
and the bucket driver is optimized for contiguous block allocation (or at
least allocation in bulks that are a multiple of the contiguous block size).

However, real usage of the bucket driver even without contiguous block
dequeue (a transmit-only benchmark which simply generates traffic) shows
a better packet rate. It looks like this is because the driver is
stack-based (per lcore, without locks/barriers) and it improves the cache
hit rate (the working memory is smaller since it is a subset of the mempool
instead of the entire mempool when some objects do not fit into the mempool
cache).

Unfortunately I have not yet finalized the patches which allow the above
measurements to be repeated (they were done using hacks).

The driver is required for [5].


[1] https://dpdk.org/ml/archives/dev/2018-January/088698.html
[2] https://dpdk.org/ml/archives/dev/2017-November/082335.html
[3] https://dpdk.org/ml/archives/dev/2018-April/097354.html
[4] https://dpdk.org/ml/archives/dev/2018-April/097352.html
[5] https://dpdk.org/ml/archives/dev/2018-April/098089.html

v2 -> v3:
 - rebase
 - align the rte_mempool_info structure size to avoid ABI breakages in a
   number of cases when something relatively small is added
 - fix a bug in get_count caused by objects in the adoption rings not
   being counted
 - squash __mempool_generic_get_contig_blocks() into
   rte_mempool_get_contig_blocks()
 - fix typo in documentation

v1 -> v2:
  - just rebase

RFCv2 -> v1:
  - rebased on top of [3]
  - cleanup deprecation notice when it is done
  - mark a new API experimental
  - move contig blocks dequeue debug checks/processing to the library function
  - add contig blocks get stats
  - add release notes

RFCv1 -> RFCv2:
  - change the info API so that the driver provides the information an
    API user requires to know the contiguous block size
  - use SPDX tags
  - avoid affinity of all objects to a single lcore
  - fix bucket get_count
  - fix NO_CACHE_ALIGN case in bucket mempool


Andrew Rybchenko (1):
  doc: advertise bucket mempool driver

Artem V. Andreev (5):
  mempool/bucket: implement bucket mempool manager
  mempool: implement abstract mempool info API
  mempool: support block dequeue operation
  mempool/bucket: implement block dequeue operation
  mempool/bucket: do not allow one lcore to grab all buckets

 MAINTAINERS                                        |   9 +
 config/common_base                                 |   2 +
 doc/guides/rel_notes/deprecation.rst               |   7 -
 doc/guides/rel_notes/release_18_05.rst             |  10 +-
 drivers/mempool/Makefile                           |   1 +
 drivers/mempool/bucket/Makefile                    |  27 +
 drivers/mempool/bucket/meson.build                 |   9 +
 drivers/mempool/bucket/rte_mempool_bucket.c        | 628 +++++++++++++++++++++
 .../mempool/bucket/rte_mempool_bucket_version.map  |   4 +
 lib/librte_mempool/Makefile                        |   1 +
 lib/librte_mempool/meson.build                     |   2 +
 lib/librte_mempool/rte_mempool.c                   |  39 ++
 lib/librte_mempool/rte_mempool.h                   | 171 ++++++
 lib/librte_mempool/rte_mempool_ops.c               |  16 +
 lib/librte_mempool/rte_mempool_version.map         |   8 +
 mk/rte.app.mk                                      |   1 +
 16 files changed, 927 insertions(+), 8 deletions(-)
 create mode 100644 drivers/mempool/bucket/Makefile
 create mode 100644 drivers/mempool/bucket/meson.build
 create mode 100644 drivers/mempool/bucket/rte_mempool_bucket.c
 create mode 100644 drivers/mempool/bucket/rte_mempool_bucket_version.map

-- 
2.14.1

^ permalink raw reply	[flat|nested] 197+ messages in thread

* [PATCH v3 1/6] mempool/bucket: implement bucket mempool manager
  2018-04-25 16:32 ` [PATCH v3 " Andrew Rybchenko
@ 2018-04-25 16:32   ` Andrew Rybchenko
  2018-04-25 16:32   ` [PATCH v3 2/6] mempool: implement abstract mempool info API Andrew Rybchenko
                     ` (4 subsequent siblings)
  5 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-04-25 16:32 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ, Artem V. Andreev

From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>

The manager provides a way to allocate a physically and virtually
contiguous set of objects.

Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 MAINTAINERS                                        |   9 +
 config/common_base                                 |   2 +
 drivers/mempool/Makefile                           |   1 +
 drivers/mempool/bucket/Makefile                    |  27 +
 drivers/mempool/bucket/meson.build                 |   9 +
 drivers/mempool/bucket/rte_mempool_bucket.c        | 563 +++++++++++++++++++++
 .../mempool/bucket/rte_mempool_bucket_version.map  |   4 +
 mk/rte.app.mk                                      |   1 +
 8 files changed, 616 insertions(+)
 create mode 100644 drivers/mempool/bucket/Makefile
 create mode 100644 drivers/mempool/bucket/meson.build
 create mode 100644 drivers/mempool/bucket/rte_mempool_bucket.c
 create mode 100644 drivers/mempool/bucket/rte_mempool_bucket_version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index ec0b4845f..97dd70782 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -364,6 +364,15 @@ F: test/test/test_rawdev.c
 F: doc/guides/prog_guide/rawdev.rst
 
 
+Memory Pool Drivers
+-------------------
+
+Bucket memory pool
+M: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
+M: Andrew Rybchenko <arybchenko@solarflare.com>
+F: drivers/mempool/bucket/
+
+
 Bus Drivers
 -----------
 
diff --git a/config/common_base b/config/common_base
index 2787eb66e..03a8688b5 100644
--- a/config/common_base
+++ b/config/common_base
@@ -633,6 +633,8 @@ CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG=n
 #
 # Compile Mempool drivers
 #
+CONFIG_RTE_DRIVER_MEMPOOL_BUCKET=y
+CONFIG_RTE_DRIVER_MEMPOOL_BUCKET_SIZE_KB=64
 CONFIG_RTE_DRIVER_MEMPOOL_RING=y
 CONFIG_RTE_DRIVER_MEMPOOL_STACK=y
 
diff --git a/drivers/mempool/Makefile b/drivers/mempool/Makefile
index fc8b73b38..28c2e8360 100644
--- a/drivers/mempool/Makefile
+++ b/drivers/mempool/Makefile
@@ -3,6 +3,7 @@
 
 include $(RTE_SDK)/mk/rte.vars.mk
 
+DIRS-$(CONFIG_RTE_DRIVER_MEMPOOL_BUCKET) += bucket
 ifeq ($(CONFIG_RTE_LIBRTE_DPAA_BUS),y)
 DIRS-$(CONFIG_RTE_LIBRTE_DPAA_MEMPOOL) += dpaa
 endif
diff --git a/drivers/mempool/bucket/Makefile b/drivers/mempool/bucket/Makefile
new file mode 100644
index 000000000..7364916bc
--- /dev/null
+++ b/drivers/mempool/bucket/Makefile
@@ -0,0 +1,27 @@
+# SPDX-License-Identifier: BSD-3-Clause
+#
+# Copyright (c) 2017-2018 Solarflare Communications Inc.
+# All rights reserved.
+#
+# This software was jointly developed between OKTET Labs (under contract
+# for Solarflare) and Solarflare Communications, Inc.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_mempool_bucket.a
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+LDLIBS += -lrte_eal -lrte_mempool -lrte_ring
+
+EXPORT_MAP := rte_mempool_bucket_version.map
+
+LIBABIVER := 1
+
+SRCS-$(CONFIG_RTE_DRIVER_MEMPOOL_BUCKET) += rte_mempool_bucket.c
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/mempool/bucket/meson.build b/drivers/mempool/bucket/meson.build
new file mode 100644
index 000000000..618d79128
--- /dev/null
+++ b/drivers/mempool/bucket/meson.build
@@ -0,0 +1,9 @@
+# SPDX-License-Identifier: BSD-3-Clause
+#
+# Copyright (c) 2017-2018 Solarflare Communications Inc.
+# All rights reserved.
+#
+# This software was jointly developed between OKTET Labs (under contract
+# for Solarflare) and Solarflare Communications, Inc.
+
+sources = files('rte_mempool_bucket.c')
diff --git a/drivers/mempool/bucket/rte_mempool_bucket.c b/drivers/mempool/bucket/rte_mempool_bucket.c
new file mode 100644
index 000000000..ef822eb2a
--- /dev/null
+++ b/drivers/mempool/bucket/rte_mempool_bucket.c
@@ -0,0 +1,563 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright (c) 2017-2018 Solarflare Communications Inc.
+ * All rights reserved.
+ *
+ * This software was jointly developed between OKTET Labs (under contract
+ * for Solarflare) and Solarflare Communications, Inc.
+ */
+
+#include <stdbool.h>
+#include <stdio.h>
+#include <string.h>
+
+#include <rte_errno.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+#include <rte_malloc.h>
+
+/*
+ * The general idea of the bucket mempool driver is as follows.
+ * We keep track of physically contiguous groups (buckets) of objects
+ * of a certain size. Every such group has a counter that is
+ * incremented every time an object from that group is enqueued.
+ * Until the bucket is full, no objects from it are eligible for allocation.
+ * If a request is made to dequeue a multiple of the bucket size, it is
+ * satisfied by returning whole buckets instead of separate objects.
+ */
+
+
+struct bucket_header {
+	unsigned int lcore_id;
+	uint8_t fill_cnt;
+};
+
+struct bucket_stack {
+	unsigned int top;
+	unsigned int limit;
+	void *objects[];
+};
+
+struct bucket_data {
+	unsigned int header_size;
+	unsigned int total_elt_size;
+	unsigned int obj_per_bucket;
+	uintptr_t bucket_page_mask;
+	struct rte_ring *shared_bucket_ring;
+	struct bucket_stack *buckets[RTE_MAX_LCORE];
+	/*
+	 * Multi-producer single-consumer ring to hold objects that are
+	 * returned to the mempool at a different lcore than initially
+	 * dequeued
+	 */
+	struct rte_ring *adoption_buffer_rings[RTE_MAX_LCORE];
+	struct rte_ring *shared_orphan_ring;
+	struct rte_mempool *pool;
+	unsigned int bucket_mem_size;
+};
+
+static struct bucket_stack *
+bucket_stack_create(const struct rte_mempool *mp, unsigned int n_elts)
+{
+	struct bucket_stack *stack;
+
+	stack = rte_zmalloc_socket("bucket_stack",
+				   sizeof(struct bucket_stack) +
+				   n_elts * sizeof(void *),
+				   RTE_CACHE_LINE_SIZE,
+				   mp->socket_id);
+	if (stack == NULL)
+		return NULL;
+	stack->limit = n_elts;
+	stack->top = 0;
+
+	return stack;
+}
+
+static void
+bucket_stack_push(struct bucket_stack *stack, void *obj)
+{
+	RTE_ASSERT(stack->top < stack->limit);
+	stack->objects[stack->top++] = obj;
+}
+
+static void *
+bucket_stack_pop_unsafe(struct bucket_stack *stack)
+{
+	RTE_ASSERT(stack->top > 0);
+	return stack->objects[--stack->top];
+}
+
+static void *
+bucket_stack_pop(struct bucket_stack *stack)
+{
+	if (stack->top == 0)
+		return NULL;
+	return bucket_stack_pop_unsafe(stack);
+}
+
+static int
+bucket_enqueue_single(struct bucket_data *bd, void *obj)
+{
+	int rc = 0;
+	uintptr_t addr = (uintptr_t)obj;
+	struct bucket_header *hdr;
+	unsigned int lcore_id = rte_lcore_id();
+
+	addr &= bd->bucket_page_mask;
+	hdr = (struct bucket_header *)addr;
+
+	if (likely(hdr->lcore_id == lcore_id)) {
+		if (hdr->fill_cnt < bd->obj_per_bucket - 1) {
+			hdr->fill_cnt++;
+		} else {
+			hdr->fill_cnt = 0;
+			/* Stack is big enough to put all buckets */
+			bucket_stack_push(bd->buckets[lcore_id], hdr);
+		}
+	} else if (hdr->lcore_id != LCORE_ID_ANY) {
+		struct rte_ring *adopt_ring =
+			bd->adoption_buffer_rings[hdr->lcore_id];
+
+		rc = rte_ring_enqueue(adopt_ring, obj);
+		/* Ring is big enough to put all objects */
+		RTE_ASSERT(rc == 0);
+	} else if (hdr->fill_cnt < bd->obj_per_bucket - 1) {
+		hdr->fill_cnt++;
+	} else {
+		hdr->fill_cnt = 0;
+		rc = rte_ring_enqueue(bd->shared_bucket_ring, hdr);
+		/* Ring is big enough to put all buckets */
+		RTE_ASSERT(rc == 0);
+	}
+
+	return rc;
+}
+
+static int
+bucket_enqueue(struct rte_mempool *mp, void * const *obj_table,
+	       unsigned int n)
+{
+	struct bucket_data *bd = mp->pool_data;
+	unsigned int i;
+	int rc = 0;
+
+	for (i = 0; i < n; i++) {
+		rc = bucket_enqueue_single(bd, obj_table[i]);
+		RTE_ASSERT(rc == 0);
+	}
+	return rc;
+}
+
+static void **
+bucket_fill_obj_table(const struct bucket_data *bd, void **pstart,
+		      void **obj_table, unsigned int n)
+{
+	unsigned int i;
+	uint8_t *objptr = *pstart;
+
+	for (objptr += bd->header_size, i = 0; i < n;
+	     i++, objptr += bd->total_elt_size)
+		*obj_table++ = objptr;
+	*pstart = objptr;
+	return obj_table;
+}
+
+static int
+bucket_dequeue_orphans(struct bucket_data *bd, void **obj_table,
+		       unsigned int n_orphans)
+{
+	unsigned int i;
+	int rc;
+	uint8_t *objptr;
+
+	rc = rte_ring_dequeue_bulk(bd->shared_orphan_ring, obj_table,
+				   n_orphans, NULL);
+	if (unlikely(rc != (int)n_orphans)) {
+		struct bucket_header *hdr;
+
+		objptr = bucket_stack_pop(bd->buckets[rte_lcore_id()]);
+		hdr = (struct bucket_header *)objptr;
+
+		if (objptr == NULL) {
+			rc = rte_ring_dequeue(bd->shared_bucket_ring,
+					      (void **)&objptr);
+			if (rc != 0) {
+				rte_errno = ENOBUFS;
+				return -rte_errno;
+			}
+			hdr = (struct bucket_header *)objptr;
+			hdr->lcore_id = rte_lcore_id();
+		}
+		hdr->fill_cnt = 0;
+		bucket_fill_obj_table(bd, (void **)&objptr, obj_table,
+				      n_orphans);
+		for (i = n_orphans; i < bd->obj_per_bucket; i++,
+			     objptr += bd->total_elt_size) {
+			rc = rte_ring_enqueue(bd->shared_orphan_ring,
+					      objptr);
+			if (rc != 0) {
+				RTE_ASSERT(0);
+				rte_errno = -rc;
+				return rc;
+			}
+		}
+	}
+
+	return 0;
+}
+
+static int
+bucket_dequeue_buckets(struct bucket_data *bd, void **obj_table,
+		       unsigned int n_buckets)
+{
+	struct bucket_stack *cur_stack = bd->buckets[rte_lcore_id()];
+	unsigned int n_buckets_from_stack = RTE_MIN(n_buckets, cur_stack->top);
+	void **obj_table_base = obj_table;
+
+	n_buckets -= n_buckets_from_stack;
+	while (n_buckets_from_stack-- > 0) {
+		void *obj = bucket_stack_pop_unsafe(cur_stack);
+
+		obj_table = bucket_fill_obj_table(bd, &obj, obj_table,
+						  bd->obj_per_bucket);
+	}
+	while (n_buckets-- > 0) {
+		struct bucket_header *hdr;
+
+		if (unlikely(rte_ring_dequeue(bd->shared_bucket_ring,
+					      (void **)&hdr) != 0)) {
+			/*
+			 * Return the already-dequeued buffers
+			 * back to the mempool
+			 */
+			bucket_enqueue(bd->pool, obj_table_base,
+				       obj_table - obj_table_base);
+			rte_errno = ENOBUFS;
+			return -rte_errno;
+		}
+		hdr->lcore_id = rte_lcore_id();
+		obj_table = bucket_fill_obj_table(bd, (void **)&hdr,
+						  obj_table,
+						  bd->obj_per_bucket);
+	}
+
+	return 0;
+}
+
+static int
+bucket_adopt_orphans(struct bucket_data *bd)
+{
+	int rc = 0;
+	struct rte_ring *adopt_ring =
+		bd->adoption_buffer_rings[rte_lcore_id()];
+
+	if (unlikely(!rte_ring_empty(adopt_ring))) {
+		void *orphan;
+
+		while (rte_ring_sc_dequeue(adopt_ring, &orphan) == 0) {
+			rc = bucket_enqueue_single(bd, orphan);
+			RTE_ASSERT(rc == 0);
+		}
+	}
+	return rc;
+}
+
+static int
+bucket_dequeue(struct rte_mempool *mp, void **obj_table, unsigned int n)
+{
+	struct bucket_data *bd = mp->pool_data;
+	unsigned int n_buckets = n / bd->obj_per_bucket;
+	unsigned int n_orphans = n - n_buckets * bd->obj_per_bucket;
+	int rc = 0;
+
+	bucket_adopt_orphans(bd);
+
+	if (unlikely(n_orphans > 0)) {
+		rc = bucket_dequeue_orphans(bd, obj_table +
+					    (n_buckets * bd->obj_per_bucket),
+					    n_orphans);
+		if (rc != 0)
+			return rc;
+	}
+
+	if (likely(n_buckets > 0)) {
+		rc = bucket_dequeue_buckets(bd, obj_table, n_buckets);
+		if (unlikely(rc != 0) && n_orphans > 0) {
+			rte_ring_enqueue_bulk(bd->shared_orphan_ring,
+					      obj_table + (n_buckets *
+							   bd->obj_per_bucket),
+					      n_orphans, NULL);
+		}
+	}
+
+	return rc;
+}
+
+static void
+count_underfilled_buckets(struct rte_mempool *mp,
+			  void *opaque,
+			  struct rte_mempool_memhdr *memhdr,
+			  __rte_unused unsigned int mem_idx)
+{
+	unsigned int *pcount = opaque;
+	const struct bucket_data *bd = mp->pool_data;
+	unsigned int bucket_page_sz =
+		(unsigned int)(~bd->bucket_page_mask + 1);
+	uintptr_t align;
+	uint8_t *iter;
+
+	align = (uintptr_t)RTE_PTR_ALIGN_CEIL(memhdr->addr, bucket_page_sz) -
+		(uintptr_t)memhdr->addr;
+
+	for (iter = (uint8_t *)memhdr->addr + align;
+	     iter < (uint8_t *)memhdr->addr + memhdr->len;
+	     iter += bucket_page_sz) {
+		struct bucket_header *hdr = (struct bucket_header *)iter;
+
+		*pcount += hdr->fill_cnt;
+	}
+}
+
+static unsigned int
+bucket_get_count(const struct rte_mempool *mp)
+{
+	const struct bucket_data *bd = mp->pool_data;
+	unsigned int count =
+		bd->obj_per_bucket * rte_ring_count(bd->shared_bucket_ring) +
+		rte_ring_count(bd->shared_orphan_ring);
+	unsigned int i;
+
+	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		if (!rte_lcore_is_enabled(i))
+			continue;
+		count += bd->obj_per_bucket * bd->buckets[i]->top +
+			rte_ring_count(bd->adoption_buffer_rings[i]);
+	}
+
+	rte_mempool_mem_iter((struct rte_mempool *)(uintptr_t)mp,
+			     count_underfilled_buckets, &count);
+
+	return count;
+}
+
+static int
+bucket_alloc(struct rte_mempool *mp)
+{
+	int rg_flags = 0;
+	int rc = 0;
+	char rg_name[RTE_RING_NAMESIZE];
+	struct bucket_data *bd;
+	unsigned int i;
+	unsigned int bucket_header_size;
+
+	bd = rte_zmalloc_socket("bucket_pool", sizeof(*bd),
+				RTE_CACHE_LINE_SIZE, mp->socket_id);
+	if (bd == NULL) {
+		rc = -ENOMEM;
+		goto no_mem_for_data;
+	}
+	bd->pool = mp;
+	if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
+		bucket_header_size = sizeof(struct bucket_header);
+	else
+		bucket_header_size = RTE_CACHE_LINE_SIZE;
+	RTE_BUILD_BUG_ON(sizeof(struct bucket_header) > RTE_CACHE_LINE_SIZE);
+	bd->header_size = mp->header_size + bucket_header_size;
+	bd->total_elt_size = mp->header_size + mp->elt_size + mp->trailer_size;
+	bd->bucket_mem_size = RTE_DRIVER_MEMPOOL_BUCKET_SIZE_KB * 1024;
+	bd->obj_per_bucket = (bd->bucket_mem_size - bucket_header_size) /
+		bd->total_elt_size;
+	bd->bucket_page_mask = ~(rte_align64pow2(bd->bucket_mem_size) - 1);
+
+	if (mp->flags & MEMPOOL_F_SP_PUT)
+		rg_flags |= RING_F_SP_ENQ;
+	if (mp->flags & MEMPOOL_F_SC_GET)
+		rg_flags |= RING_F_SC_DEQ;
+
+	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		if (!rte_lcore_is_enabled(i))
+			continue;
+		bd->buckets[i] =
+			bucket_stack_create(mp, mp->size / bd->obj_per_bucket);
+		if (bd->buckets[i] == NULL) {
+			rc = -ENOMEM;
+			goto no_mem_for_stacks;
+		}
+		rc = snprintf(rg_name, sizeof(rg_name),
+			      RTE_MEMPOOL_MZ_FORMAT ".a%u", mp->name, i);
+		if (rc < 0 || rc >= (int)sizeof(rg_name)) {
+			rc = -ENAMETOOLONG;
+			goto no_mem_for_stacks;
+		}
+		bd->adoption_buffer_rings[i] =
+			rte_ring_create(rg_name, rte_align32pow2(mp->size + 1),
+					mp->socket_id,
+					rg_flags | RING_F_SC_DEQ);
+		if (bd->adoption_buffer_rings[i] == NULL) {
+			rc = -rte_errno;
+			goto no_mem_for_stacks;
+		}
+	}
+
+	rc = snprintf(rg_name, sizeof(rg_name),
+		      RTE_MEMPOOL_MZ_FORMAT ".0", mp->name);
+	if (rc < 0 || rc >= (int)sizeof(rg_name)) {
+		rc = -ENAMETOOLONG;
+		goto invalid_shared_orphan_ring;
+	}
+	bd->shared_orphan_ring =
+		rte_ring_create(rg_name, rte_align32pow2(mp->size + 1),
+				mp->socket_id, rg_flags);
+	if (bd->shared_orphan_ring == NULL) {
+		rc = -rte_errno;
+		goto cannot_create_shared_orphan_ring;
+	}
+
+	rc = snprintf(rg_name, sizeof(rg_name),
+		       RTE_MEMPOOL_MZ_FORMAT ".1", mp->name);
+	if (rc < 0 || rc >= (int)sizeof(rg_name)) {
+		rc = -ENAMETOOLONG;
+		goto invalid_shared_bucket_ring;
+	}
+	bd->shared_bucket_ring =
+		rte_ring_create(rg_name,
+				rte_align32pow2((mp->size + 1) /
+						bd->obj_per_bucket),
+				mp->socket_id, rg_flags);
+	if (bd->shared_bucket_ring == NULL) {
+		rc = -rte_errno;
+		goto cannot_create_shared_bucket_ring;
+	}
+
+	mp->pool_data = bd;
+
+	return 0;
+
+cannot_create_shared_bucket_ring:
+invalid_shared_bucket_ring:
+	rte_ring_free(bd->shared_orphan_ring);
+cannot_create_shared_orphan_ring:
+invalid_shared_orphan_ring:
+no_mem_for_stacks:
+	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		rte_free(bd->buckets[i]);
+		rte_ring_free(bd->adoption_buffer_rings[i]);
+	}
+	rte_free(bd);
+no_mem_for_data:
+	rte_errno = -rc;
+	return rc;
+}
+
+static void
+bucket_free(struct rte_mempool *mp)
+{
+	unsigned int i;
+	struct bucket_data *bd = mp->pool_data;
+
+	if (bd == NULL)
+		return;
+
+	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		rte_free(bd->buckets[i]);
+		rte_ring_free(bd->adoption_buffer_rings[i]);
+	}
+
+	rte_ring_free(bd->shared_orphan_ring);
+	rte_ring_free(bd->shared_bucket_ring);
+
+	rte_free(bd);
+}
+
+static ssize_t
+bucket_calc_mem_size(const struct rte_mempool *mp, uint32_t obj_num,
+		     __rte_unused uint32_t pg_shift, size_t *min_total_elt_size,
+		     size_t *align)
+{
+	struct bucket_data *bd = mp->pool_data;
+	unsigned int bucket_page_sz;
+
+	if (bd == NULL)
+		return -EINVAL;
+
+	bucket_page_sz = rte_align32pow2(bd->bucket_mem_size);
+	*align = bucket_page_sz;
+	*min_total_elt_size = bucket_page_sz;
+	/*
+	 * Each bucket occupies its own block aligned to
+	 * bucket_page_sz, so the required amount of memory is
+	 * a multiple of bucket_page_sz.
+	 * We also need extra space for a bucket header
+	 */
+	return ((obj_num + bd->obj_per_bucket - 1) /
+		bd->obj_per_bucket) * bucket_page_sz;
+}
+
+static int
+bucket_populate(struct rte_mempool *mp, unsigned int max_objs,
+		void *vaddr, rte_iova_t iova, size_t len,
+		rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
+{
+	struct bucket_data *bd = mp->pool_data;
+	unsigned int bucket_page_sz;
+	unsigned int bucket_header_sz;
+	unsigned int n_objs;
+	uintptr_t align;
+	uint8_t *iter;
+	int rc;
+
+	if (bd == NULL)
+		return -EINVAL;
+
+	bucket_page_sz = rte_align32pow2(bd->bucket_mem_size);
+	align = RTE_PTR_ALIGN_CEIL((uintptr_t)vaddr, bucket_page_sz) -
+		(uintptr_t)vaddr;
+
+	bucket_header_sz = bd->header_size - mp->header_size;
+	if (iova != RTE_BAD_IOVA)
+		iova += align + bucket_header_sz;
+
+	for (iter = (uint8_t *)vaddr + align, n_objs = 0;
+	     iter < (uint8_t *)vaddr + len && n_objs < max_objs;
+	     iter += bucket_page_sz) {
+		struct bucket_header *hdr = (struct bucket_header *)iter;
+		unsigned int chunk_len = bd->bucket_mem_size;
+
+		if ((size_t)(iter - (uint8_t *)vaddr) + chunk_len > len)
+			chunk_len = len - (iter - (uint8_t *)vaddr);
+		if (chunk_len <= bucket_header_sz)
+			break;
+		chunk_len -= bucket_header_sz;
+
+		hdr->fill_cnt = 0;
+		hdr->lcore_id = LCORE_ID_ANY;
+		rc = rte_mempool_op_populate_default(mp,
+						     RTE_MIN(bd->obj_per_bucket,
+							     max_objs - n_objs),
+						     iter + bucket_header_sz,
+						     iova, chunk_len,
+						     obj_cb, obj_cb_arg);
+		if (rc < 0)
+			return rc;
+		n_objs += rc;
+		if (iova != RTE_BAD_IOVA)
+			iova += bucket_page_sz;
+	}
+
+	return n_objs;
+}
+
+static const struct rte_mempool_ops ops_bucket = {
+	.name = "bucket",
+	.alloc = bucket_alloc,
+	.free = bucket_free,
+	.enqueue = bucket_enqueue,
+	.dequeue = bucket_dequeue,
+	.get_count = bucket_get_count,
+	.calc_mem_size = bucket_calc_mem_size,
+	.populate = bucket_populate,
+};
+
+
+MEMPOOL_REGISTER_OPS(ops_bucket);
diff --git a/drivers/mempool/bucket/rte_mempool_bucket_version.map b/drivers/mempool/bucket/rte_mempool_bucket_version.map
new file mode 100644
index 000000000..9b9ab1a4c
--- /dev/null
+++ b/drivers/mempool/bucket/rte_mempool_bucket_version.map
@@ -0,0 +1,4 @@
+DPDK_18.05 {
+
+	local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 1584800ce..29a2a6095 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -125,6 +125,7 @@ endif
 ifeq ($(CONFIG_RTE_BUILD_SHARED_LIB),n)
 # plugins (link only if static libraries)
 
+_LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_BUCKET) += -lrte_mempool_bucket
 _LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_STACK)  += -lrte_mempool_stack
 ifeq ($(CONFIG_RTE_LIBRTE_DPAA_BUS),y)
 _LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA_MEMPOOL)   += -lrte_mempool_dpaa
-- 
2.14.1

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [PATCH v3 2/6] mempool: implement abstract mempool info API
  2018-04-25 16:32 ` [PATCH v3 " Andrew Rybchenko
  2018-04-25 16:32   ` [PATCH v3 1/6] mempool/bucket: implement bucket mempool manager Andrew Rybchenko
@ 2018-04-25 16:32   ` Andrew Rybchenko
  2018-04-25 16:32   ` [PATCH v3 3/6] mempool: support block dequeue operation Andrew Rybchenko
                     ` (3 subsequent siblings)
  5 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-04-25 16:32 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ, Artem V. Andreev

From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>

Primarily, it is intended as a way for the mempool driver to provide
additional information on how it lays out objects inside the mempool.

Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
 lib/librte_mempool/rte_mempool.h           | 50 ++++++++++++++++++++++++++++++
 lib/librte_mempool/rte_mempool_ops.c       | 15 +++++++++
 lib/librte_mempool/rte_mempool_version.map |  7 +++++
 3 files changed, 72 insertions(+)

diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 3e06ae051..853f2da4d 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -189,6 +189,23 @@ struct rte_mempool_memhdr {
 	void *opaque;            /**< Argument passed to the free callback */
 };
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Additional information about the mempool
+ *
+ * The structure is cache-line aligned to avoid ABI breakages in
+ * a number of cases when something small is added.
+ */
+struct rte_mempool_info {
+	/*
+	 * Dummy structure member to make it non-empty until the first
+	 * real member is added.
+	 */
+	unsigned int dummy;
+} __rte_cache_aligned;
+
 /**
  * The RTE mempool structure.
  */
@@ -499,6 +516,16 @@ int rte_mempool_op_populate_default(struct rte_mempool *mp,
 		void *vaddr, rte_iova_t iova, size_t len,
 		rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Get some additional information about a mempool.
+ */
+typedef int (*rte_mempool_get_info_t)(const struct rte_mempool *mp,
+		struct rte_mempool_info *info);
+
+
 /** Structure defining mempool operations structure */
 struct rte_mempool_ops {
 	char name[RTE_MEMPOOL_OPS_NAMESIZE]; /**< Name of mempool ops struct. */
@@ -517,6 +544,10 @@ struct rte_mempool_ops {
 	 * provided memory chunk.
 	 */
 	rte_mempool_populate_t populate;
+	/**
+	 * Get mempool info
+	 */
+	rte_mempool_get_info_t get_info;
 } __rte_cache_aligned;
 
 #define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */
@@ -679,6 +710,25 @@ int rte_mempool_ops_populate(struct rte_mempool *mp, unsigned int max_objs,
 			     rte_mempool_populate_obj_cb_t *obj_cb,
 			     void *obj_cb_arg);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Wrapper for mempool_ops get_info callback.
+ *
+ * @param[in] mp
+ *   Pointer to the memory pool.
+ * @param[out] info
+ *   Pointer to the rte_mempool_info structure
+ * @return
+ *   - 0: Success; The mempool driver supports retrieving supplementary
+ *        mempool information
+ *   - -ENOTSUP - doesn't support get_info ops (valid case).
+ */
+__rte_experimental
+int rte_mempool_ops_get_info(const struct rte_mempool *mp,
+			 struct rte_mempool_info *info);
+
 /**
  * @internal wrapper for mempool_ops free callback.
  *
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
index ea9be1eb2..efc1c084c 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -59,6 +59,7 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h)
 	ops->get_count = h->get_count;
 	ops->calc_mem_size = h->calc_mem_size;
 	ops->populate = h->populate;
+	ops->get_info = h->get_info;
 
 	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
 
@@ -134,6 +135,20 @@ rte_mempool_ops_populate(struct rte_mempool *mp, unsigned int max_objs,
 			     obj_cb_arg);
 }
 
+/* wrapper to get additional mempool info */
+int
+rte_mempool_ops_get_info(const struct rte_mempool *mp,
+			 struct rte_mempool_info *info)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_get_ops(mp->ops_index);
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->get_info, -ENOTSUP);
+	return ops->get_info(mp, info);
+}
+
+
 /* sets mempool ops previously registered by rte_mempool_register_ops. */
 int
 rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index cf375dbe6..c9d16ecc4 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -57,3 +57,10 @@ DPDK_18.05 {
 	rte_mempool_op_populate_default;
 
 } DPDK_17.11;
+
+EXPERIMENTAL {
+	global:
+
+	rte_mempool_ops_get_info;
+
+} DPDK_18.05;
-- 
2.14.1

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [PATCH v3 3/6] mempool: support block dequeue operation
  2018-04-25 16:32 ` [PATCH v3 " Andrew Rybchenko
  2018-04-25 16:32   ` [PATCH v3 1/6] mempool/bucket: implement bucket mempool manager Andrew Rybchenko
  2018-04-25 16:32   ` [PATCH v3 2/6] mempool: implement abstract mempool info API Andrew Rybchenko
@ 2018-04-25 16:32   ` Andrew Rybchenko
  2018-04-25 16:32   ` [PATCH v3 4/6] mempool/bucket: implement " Andrew Rybchenko
                     ` (2 subsequent siblings)
  5 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-04-25 16:32 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ, Artem V. Andreev

From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>

If the mempool manager supports object blocks (a physically and virtually
contiguous set of objects), it is sufficient to get only the first
object, and the function avoids filling in information about each
block member.

Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
 doc/guides/rel_notes/deprecation.rst       |   7 --
 lib/librte_mempool/Makefile                |   1 +
 lib/librte_mempool/meson.build             |   2 +
 lib/librte_mempool/rte_mempool.c           |  39 +++++++++
 lib/librte_mempool/rte_mempool.h           | 131 +++++++++++++++++++++++++++--
 lib/librte_mempool/rte_mempool_ops.c       |   1 +
 lib/librte_mempool/rte_mempool_version.map |   1 +
 7 files changed, 170 insertions(+), 12 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index bce97a2a9..faf1e527e 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -42,13 +42,6 @@ Deprecation Notices
 
   - ``rte_eal_mbuf_default_mempool_ops``
 
-* mempool: several API and ABI changes are planned in v18.05.
-
-  The following changes are planned:
-
-  - addition of new op to allocate contiguous
-    block of objects if underlying driver supports it.
-
 * mbuf: The opaque ``mbuf->hash.sched`` field will be updated to support generic
   definition in line with the ethdev TM and MTR APIs. Currently, this field
   is defined in librte_sched in a non-generic way. The new generic format
diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
index 7f19f005a..e3c32b14f 100644
--- a/lib/librte_mempool/Makefile
+++ b/lib/librte_mempool/Makefile
@@ -10,6 +10,7 @@ CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
 # Allow deprecated symbol to use deprecated rte_mempool_populate_iova_tab()
 # from earlier deprecated rte_mempool_populate_phys_tab()
 CFLAGS += -Wno-deprecated-declarations
+CFLAGS += -DALLOW_EXPERIMENTAL_API
 LDLIBS += -lrte_eal -lrte_ring
 
 EXPORT_MAP := rte_mempool_version.map
diff --git a/lib/librte_mempool/meson.build b/lib/librte_mempool/meson.build
index baf2d24d5..d507e5511 100644
--- a/lib/librte_mempool/meson.build
+++ b/lib/librte_mempool/meson.build
@@ -1,6 +1,8 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2017 Intel Corporation
 
+allow_experimental_apis = true
+
 extra_flags = []
 
 # Allow deprecated symbol to use deprecated rte_mempool_populate_iova_tab()
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 84b3d640f..cf5d124ec 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -1255,6 +1255,36 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp,
 #endif
 }
 
+void
+rte_mempool_contig_blocks_check_cookies(const struct rte_mempool *mp,
+	void * const *first_obj_table_const, unsigned int n, int free)
+{
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+	struct rte_mempool_info info;
+	const size_t total_elt_sz =
+		mp->header_size + mp->elt_size + mp->trailer_size;
+	unsigned int i, j;
+
+	rte_mempool_ops_get_info(mp, &info);
+
+	for (i = 0; i < n; ++i) {
+		void *first_obj = first_obj_table_const[i];
+
+		for (j = 0; j < info.contig_block_size; ++j) {
+			void *obj;
+
+			obj = (void *)((uintptr_t)first_obj + j * total_elt_sz);
+			rte_mempool_check_cookies(mp, &obj, 1, free);
+		}
+	}
+#else
+	RTE_SET_USED(mp);
+	RTE_SET_USED(first_obj_table_const);
+	RTE_SET_USED(n);
+	RTE_SET_USED(free);
+#endif
+}
+
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 static void
 mempool_obj_audit(struct rte_mempool *mp, __rte_unused void *opaque,
@@ -1320,6 +1350,7 @@ void
 rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 {
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+	struct rte_mempool_info info;
 	struct rte_mempool_debug_stats sum;
 	unsigned lcore_id;
 #endif
@@ -1361,6 +1392,7 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 
 	/* sum and dump statistics */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+	rte_mempool_ops_get_info(mp, &info);
 	memset(&sum, 0, sizeof(sum));
 	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
 		sum.put_bulk += mp->stats[lcore_id].put_bulk;
@@ -1369,6 +1401,8 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 		sum.get_success_objs += mp->stats[lcore_id].get_success_objs;
 		sum.get_fail_bulk += mp->stats[lcore_id].get_fail_bulk;
 		sum.get_fail_objs += mp->stats[lcore_id].get_fail_objs;
+		sum.get_success_blks += mp->stats[lcore_id].get_success_blks;
+		sum.get_fail_blks += mp->stats[lcore_id].get_fail_blks;
 	}
 	fprintf(f, "  stats:\n");
 	fprintf(f, "    put_bulk=%"PRIu64"\n", sum.put_bulk);
@@ -1377,6 +1411,11 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 	fprintf(f, "    get_success_objs=%"PRIu64"\n", sum.get_success_objs);
 	fprintf(f, "    get_fail_bulk=%"PRIu64"\n", sum.get_fail_bulk);
 	fprintf(f, "    get_fail_objs=%"PRIu64"\n", sum.get_fail_objs);
+	if (info.contig_block_size > 0) {
+		fprintf(f, "    get_success_blks=%"PRIu64"\n",
+			sum.get_success_blks);
+		fprintf(f, "    get_fail_blks=%"PRIu64"\n", sum.get_fail_blks);
+	}
 #else
 	fprintf(f, "  no statistics available\n");
 #endif
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 853f2da4d..1f59553b3 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -70,6 +70,10 @@ struct rte_mempool_debug_stats {
 	uint64_t get_success_objs; /**< Objects successfully allocated. */
 	uint64_t get_fail_bulk;    /**< Failed allocation number. */
 	uint64_t get_fail_objs;    /**< Objects that failed to be allocated. */
+	/** Successful allocation number of contiguous blocks. */
+	uint64_t get_success_blks;
+	/** Failed allocation number of contiguous blocks. */
+	uint64_t get_fail_blks;
 } __rte_cache_aligned;
 #endif
 
@@ -199,11 +203,8 @@ struct rte_mempool_memhdr {
  * a number of cases when something small is added.
  */
 struct rte_mempool_info {
-	/*
-	 * Dummy structure member to make it non emtpy until the first
-	 * real member is added.
-	 */
-	unsigned int dummy;
+	/** Number of objects in the contiguous block */
+	unsigned int contig_block_size;
 } __rte_cache_aligned;
 
 /**
@@ -282,8 +283,16 @@ struct rte_mempool {
 			mp->stats[__lcore_id].name##_bulk += 1;	\
 		}                                               \
 	} while(0)
+#define __MEMPOOL_CONTIG_BLOCKS_STAT_ADD(mp, name, n) do {                    \
+		unsigned int __lcore_id = rte_lcore_id();       \
+		if (__lcore_id < RTE_MAX_LCORE) {               \
+			mp->stats[__lcore_id].name##_blks += n;	\
+			mp->stats[__lcore_id].name##_bulk += 1;	\
+		}                                               \
+	} while (0)
 #else
 #define __MEMPOOL_STAT_ADD(mp, name, n) do {} while(0)
+#define __MEMPOOL_CONTIG_BLOCKS_STAT_ADD(mp, name, n) do {} while (0)
 #endif
 
 /**
@@ -351,6 +360,38 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp,
 #define __mempool_check_cookies(mp, obj_table_const, n, free) do {} while(0)
 #endif /* RTE_LIBRTE_MEMPOOL_DEBUG */
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * @internal Check contiguous object blocks and update cookies or panic.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param first_obj_table_const
+ *   Pointer to a table of void * pointers (first object of the contiguous
+ *   object blocks).
+ * @param n
+ *   Number of contiguous object blocks.
+ * @param free
+ *   - 0: object is supposed to be allocated, mark it as free
+ *   - 1: object is supposed to be free, mark it as allocated
+ *   - 2: just check that cookie is valid (free or allocated)
+ */
+void rte_mempool_contig_blocks_check_cookies(const struct rte_mempool *mp,
+	void * const *first_obj_table_const, unsigned int n, int free);
+
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+#define __mempool_contig_blocks_check_cookies(mp, first_obj_table_const, n, \
+					      free) \
+	rte_mempool_contig_blocks_check_cookies(mp, first_obj_table_const, n, \
+						free)
+#else
+#define __mempool_contig_blocks_check_cookies(mp, first_obj_table_const, n, \
+					      free) \
+	do {} while (0)
+#endif /* RTE_LIBRTE_MEMPOOL_DEBUG */
+
 #define RTE_MEMPOOL_OPS_NAMESIZE 32 /**< Max length of ops struct name. */
 
 /**
@@ -382,6 +423,15 @@ typedef int (*rte_mempool_enqueue_t)(struct rte_mempool *mp,
 typedef int (*rte_mempool_dequeue_t)(struct rte_mempool *mp,
 		void **obj_table, unsigned int n);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Dequeue a number of contiguous object blocks from the external pool.
+ */
+typedef int (*rte_mempool_dequeue_contig_blocks_t)(struct rte_mempool *mp,
+		 void **first_obj_table, unsigned int n);
+
 /**
  * Return the number of available objects in the external pool.
  */
@@ -548,6 +598,10 @@ struct rte_mempool_ops {
 	 * Get mempool info
 	 */
 	rte_mempool_get_info_t get_info;
+	/**
+	 * Dequeue a number of contiguous object blocks.
+	 */
+	rte_mempool_dequeue_contig_blocks_t dequeue_contig_blocks;
 } __rte_cache_aligned;
 
 #define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */
@@ -625,6 +679,30 @@ rte_mempool_ops_dequeue_bulk(struct rte_mempool *mp,
 	return ops->dequeue(mp, obj_table, n);
 }
 
+/**
+ * @internal Wrapper for mempool_ops dequeue_contig_blocks callback.
+ *
+ * @param[in] mp
+ *   Pointer to the memory pool.
+ * @param[out] first_obj_table
+ *   Pointer to a table of void * pointers (first objects).
+ * @param[in] n
+ *   Number of blocks to get.
+ * @return
+ *   - 0: Success; got n objects.
+ *   - <0: Error; code of dequeue function.
+ */
+static inline int
+rte_mempool_ops_dequeue_contig_blocks(struct rte_mempool *mp,
+		void **first_obj_table, unsigned int n)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_get_ops(mp->ops_index);
+	RTE_ASSERT(ops->dequeue_contig_blocks != NULL);
+	return ops->dequeue_contig_blocks(mp, first_obj_table, n);
+}
+
 /**
  * @internal wrapper for mempool_ops enqueue callback.
  *
@@ -1539,6 +1617,49 @@ rte_mempool_get(struct rte_mempool *mp, void **obj_p)
 	return rte_mempool_get_bulk(mp, obj_p, 1);
 }
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Get contiguous blocks of objects from the mempool.
+ *
+ * If a cache is enabled, consider flushing it first, to reuse objects
+ * as soon as possible.
+ *
+ * The application should check that the driver supports the operation
+ * by calling rte_mempool_ops_get_info() and checking that `contig_block_size`
+ * is not zero.
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @param first_obj_table
+ *   A pointer to a pointer to the first object in each block.
+ * @param n
+ *   The number of blocks to get from mempool.
+ * @return
+ *   - 0: Success; blocks taken.
+ *   - -ENOBUFS: Not enough entries in the mempool; no object is retrieved.
+ *   - -EOPNOTSUPP: The mempool driver does not support block dequeue
+ */
+static __rte_always_inline int
+__rte_experimental
+rte_mempool_get_contig_blocks(struct rte_mempool *mp,
+			      void **first_obj_table, unsigned int n)
+{
+	int ret;
+
+	ret = rte_mempool_ops_dequeue_contig_blocks(mp, first_obj_table, n);
+	if (ret == 0) {
+		__MEMPOOL_CONTIG_BLOCKS_STAT_ADD(mp, get_success, n);
+		__mempool_contig_blocks_check_cookies(mp, first_obj_table, n,
+						      1);
+	} else {
+		__MEMPOOL_CONTIG_BLOCKS_STAT_ADD(mp, get_fail, n);
+	}
+
+	return ret;
+}
+
 /**
  * Return the number of entries in the mempool.
  *
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
index efc1c084c..a27e1fa51 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -60,6 +60,7 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h)
 	ops->calc_mem_size = h->calc_mem_size;
 	ops->populate = h->populate;
 	ops->get_info = h->get_info;
+	ops->dequeue_contig_blocks = h->dequeue_contig_blocks;
 
 	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
 
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index c9d16ecc4..1c406b5b0 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -53,6 +53,7 @@ DPDK_17.11 {
 DPDK_18.05 {
 	global:
 
+	rte_mempool_contig_blocks_check_cookies;
 	rte_mempool_op_calc_mem_size_default;
 	rte_mempool_op_populate_default;
 
-- 
2.14.1

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [PATCH v3 4/6] mempool/bucket: implement block dequeue operation
  2018-04-25 16:32 ` [PATCH v3 " Andrew Rybchenko
                     ` (2 preceding siblings ...)
  2018-04-25 16:32   ` [PATCH v3 3/6] mempool: support block dequeue operation Andrew Rybchenko
@ 2018-04-25 16:32   ` Andrew Rybchenko
  2018-04-25 16:32   ` [PATCH v3 5/6] mempool/bucket: do not allow one lcore to grab all buckets Andrew Rybchenko
  2018-04-25 16:32   ` [PATCH v3 6/6] doc: advertise bucket mempool driver Andrew Rybchenko
  5 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-04-25 16:32 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ, Artem V. Andreev

From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>

Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/mempool/bucket/rte_mempool_bucket.c | 52 +++++++++++++++++++++++++++++
 1 file changed, 52 insertions(+)

diff --git a/drivers/mempool/bucket/rte_mempool_bucket.c b/drivers/mempool/bucket/rte_mempool_bucket.c
index ef822eb2a..24be24e96 100644
--- a/drivers/mempool/bucket/rte_mempool_bucket.c
+++ b/drivers/mempool/bucket/rte_mempool_bucket.c
@@ -294,6 +294,46 @@ bucket_dequeue(struct rte_mempool *mp, void **obj_table, unsigned int n)
 	return rc;
 }
 
+static int
+bucket_dequeue_contig_blocks(struct rte_mempool *mp, void **first_obj_table,
+			     unsigned int n)
+{
+	struct bucket_data *bd = mp->pool_data;
+	const uint32_t header_size = bd->header_size;
+	struct bucket_stack *cur_stack = bd->buckets[rte_lcore_id()];
+	unsigned int n_buckets_from_stack = RTE_MIN(n, cur_stack->top);
+	struct bucket_header *hdr;
+	void **first_objp = first_obj_table;
+
+	bucket_adopt_orphans(bd);
+
+	n -= n_buckets_from_stack;
+	while (n_buckets_from_stack-- > 0) {
+		hdr = bucket_stack_pop_unsafe(cur_stack);
+		*first_objp++ = (uint8_t *)hdr + header_size;
+	}
+	if (n > 0) {
+		if (unlikely(rte_ring_dequeue_bulk(bd->shared_bucket_ring,
+						   first_objp, n, NULL) != n)) {
+			/* Return the already dequeued buckets */
+			while (first_objp-- != first_obj_table) {
+				bucket_stack_push(cur_stack,
+						  (uint8_t *)*first_objp -
+						  header_size);
+			}
+			rte_errno = ENOBUFS;
+			return -rte_errno;
+		}
+		while (n-- > 0) {
+			hdr = (struct bucket_header *)*first_objp;
+			hdr->lcore_id = rte_lcore_id();
+			*first_objp++ = (uint8_t *)hdr + header_size;
+		}
+	}
+
+	return 0;
+}
+
 static void
 count_underfilled_buckets(struct rte_mempool *mp,
 			  void *opaque,
@@ -548,6 +588,16 @@ bucket_populate(struct rte_mempool *mp, unsigned int max_objs,
 	return n_objs;
 }
 
+static int
+bucket_get_info(const struct rte_mempool *mp, struct rte_mempool_info *info)
+{
+	struct bucket_data *bd = mp->pool_data;
+
+	info->contig_block_size = bd->obj_per_bucket;
+	return 0;
+}
+
+
 static const struct rte_mempool_ops ops_bucket = {
 	.name = "bucket",
 	.alloc = bucket_alloc,
@@ -557,6 +607,8 @@ static const struct rte_mempool_ops ops_bucket = {
 	.get_count = bucket_get_count,
 	.calc_mem_size = bucket_calc_mem_size,
 	.populate = bucket_populate,
+	.get_info = bucket_get_info,
+	.dequeue_contig_blocks = bucket_dequeue_contig_blocks,
 };
 
 
-- 
2.14.1

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [PATCH v3 5/6] mempool/bucket: do not allow one lcore to grab all buckets
  2018-04-25 16:32 ` [PATCH v3 " Andrew Rybchenko
                     ` (3 preceding siblings ...)
  2018-04-25 16:32   ` [PATCH v3 4/6] mempool/bucket: implement " Andrew Rybchenko
@ 2018-04-25 16:32   ` Andrew Rybchenko
  2018-04-25 16:32   ` [PATCH v3 6/6] doc: advertise bucket mempool driver Andrew Rybchenko
  5 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-04-25 16:32 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ, Artem V. Andreev

From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>

Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/mempool/bucket/rte_mempool_bucket.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/drivers/mempool/bucket/rte_mempool_bucket.c b/drivers/mempool/bucket/rte_mempool_bucket.c
index 24be24e96..78d2b9d04 100644
--- a/drivers/mempool/bucket/rte_mempool_bucket.c
+++ b/drivers/mempool/bucket/rte_mempool_bucket.c
@@ -42,6 +42,7 @@ struct bucket_data {
 	unsigned int header_size;
 	unsigned int total_elt_size;
 	unsigned int obj_per_bucket;
+	unsigned int bucket_stack_thresh;
 	uintptr_t bucket_page_mask;
 	struct rte_ring *shared_bucket_ring;
 	struct bucket_stack *buckets[RTE_MAX_LCORE];
@@ -139,6 +140,7 @@ bucket_enqueue(struct rte_mempool *mp, void * const *obj_table,
 	       unsigned int n)
 {
 	struct bucket_data *bd = mp->pool_data;
+	struct bucket_stack *local_stack = bd->buckets[rte_lcore_id()];
 	unsigned int i;
 	int rc = 0;
 
@@ -146,6 +148,15 @@ bucket_enqueue(struct rte_mempool *mp, void * const *obj_table,
 		rc = bucket_enqueue_single(bd, obj_table[i]);
 		RTE_ASSERT(rc == 0);
 	}
+	if (local_stack->top > bd->bucket_stack_thresh) {
+		rte_ring_enqueue_bulk(bd->shared_bucket_ring,
+				      &local_stack->objects
+				      [bd->bucket_stack_thresh],
+				      local_stack->top -
+				      bd->bucket_stack_thresh,
+				      NULL);
+	    local_stack->top = bd->bucket_stack_thresh;
+	}
 	return rc;
 }
 
@@ -409,6 +420,8 @@ bucket_alloc(struct rte_mempool *mp)
 	bd->obj_per_bucket = (bd->bucket_mem_size - bucket_header_size) /
 		bd->total_elt_size;
 	bd->bucket_page_mask = ~(rte_align64pow2(bd->bucket_mem_size) - 1);
+	/* eventually this should be a tunable parameter */
+	bd->bucket_stack_thresh = (mp->size / bd->obj_per_bucket) * 4 / 3;
 
 	if (mp->flags & MEMPOOL_F_SP_PUT)
 		rg_flags |= RING_F_SP_ENQ;
-- 
2.14.1

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [PATCH v3 6/6] doc: advertise bucket mempool driver
  2018-04-25 16:32 ` [PATCH v3 " Andrew Rybchenko
                     ` (4 preceding siblings ...)
  2018-04-25 16:32   ` [PATCH v3 5/6] mempool/bucket: do not allow one lcore to grab all buckets Andrew Rybchenko
@ 2018-04-25 16:32   ` Andrew Rybchenko
  2018-04-25 21:56     ` Thomas Monjalon
  5 siblings, 1 reply; 197+ messages in thread
From: Andrew Rybchenko @ 2018-04-25 16:32 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 doc/guides/rel_notes/release_18_05.rst | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/doc/guides/rel_notes/release_18_05.rst b/doc/guides/rel_notes/release_18_05.rst
index 222cc77cf..61dca6726 100644
--- a/doc/guides/rel_notes/release_18_05.rst
+++ b/doc/guides/rel_notes/release_18_05.rst
@@ -147,7 +147,15 @@ New Features
   compatible with virtio 0.95 and 1.0. This driver registers ifcvf vDPA driver
   to vhost lib, when virtio connected, with the help of the registered vDPA
   driver the assigned VF gets configured to Rx/Tx directly to VM's virtio
-  vrings.
+
+* **Added bucket mempool driver.**
+
+  Added bucket mempool driver which provides a way to allocate contiguous
+  block of objects.
+  Number of objects in the block depends on how many objects fit in
+  RTE_DRIVER_MEMPOOL_BUCKET_SIZE_KB memory chunk which is build time option.
+  The number may be obtained using rte_mempool_ops_get_info() API.
+  Contiguous blocks may be allocated using rte_mempool_get_contig_blocks() API.
 
 
 API Changes
-- 
2.14.1

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* Re: [PATCH v3 6/6] doc: advertise bucket mempool driver
  2018-04-25 16:32   ` [PATCH v3 6/6] doc: advertise bucket mempool driver Andrew Rybchenko
@ 2018-04-25 21:56     ` Thomas Monjalon
  2018-04-25 22:04       ` Thomas Monjalon
  0 siblings, 1 reply; 197+ messages in thread
From: Thomas Monjalon @ 2018-04-25 21:56 UTC (permalink / raw)
  To: Andrew Rybchenko; +Cc: dev, Olivier MATZ

Usually it is better to update the release notes in the main patch
implementing the feature (probably the first one in this case).
You can also update it step by step in several patches.

25/04/2018 18:32, Andrew Rybchenko:
> --- a/doc/guides/rel_notes/release_18_05.rst
> +++ b/doc/guides/rel_notes/release_18_05.rst
> @@ -147,7 +147,15 @@ New Features
>    compatible with virtio 0.95 and 1.0. This driver registers ifcvf vDPA driver
>    to vhost lib, when virtio connected, with the help of the registered vDPA
>    driver the assigned VF gets configured to Rx/Tx directly to VM's virtio
> -  vrings.

Removing this last word is probably a mistake.

> +
> +* **Added bucket mempool driver.**
> +
> +  Added bucket mempool driver which provides a way to allocate contiguous
> +  block of objects.
> +  Number of objects in the block depends on how many objects fit in
> +  RTE_DRIVER_MEMPOOL_BUCKET_SIZE_KB memory chunk which is build time option.
> +  The number may be obtained using rte_mempool_ops_get_info() API.
> +  Contiguous blocks may be allocated using rte_mempool_get_contig_blocks() API.

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v3 6/6] doc: advertise bucket mempool driver
  2018-04-25 21:56     ` Thomas Monjalon
@ 2018-04-25 22:04       ` Thomas Monjalon
  2018-04-26  9:50         ` Andrew Rybchenko
  0 siblings, 1 reply; 197+ messages in thread
From: Thomas Monjalon @ 2018-04-25 22:04 UTC (permalink / raw)
  To: Andrew Rybchenko; +Cc: dev, Olivier MATZ

25/04/2018 23:56, Thomas Monjalon:
> Usually it is better to update the release notes in the main patch
> implementing the feature (probably the first one in this case).
> You can also update it step by step in several patches.
> 
> 25/04/2018 18:32, Andrew Rybchenko:
> > --- a/doc/guides/rel_notes/release_18_05.rst
> > +++ b/doc/guides/rel_notes/release_18_05.rst
> > @@ -147,7 +147,15 @@ New Features
> >    compatible with virtio 0.95 and 1.0. This driver registers ifcvf vDPA driver
> >    to vhost lib, when virtio connected, with the help of the registered vDPA
> >    driver the assigned VF gets configured to Rx/Tx directly to VM's virtio
> > -  vrings.
> 
> Removing this last word is probably a mistake.
> 
> > +
> > +* **Added bucket mempool driver.**
> > +
> > +  Added bucket mempool driver which provides a way to allocate contiguous
> > +  block of objects.
> > +  Number of objects in the block depends on how many objects fit in
> > +  RTE_DRIVER_MEMPOOL_BUCKET_SIZE_KB memory chunk which is build time option.
> > +  The number may be obtained using rte_mempool_ops_get_info() API.
> > +  Contiguous blocks may be allocated using rte_mempool_get_contig_blocks() API.

Please add this feature at the beginning of the list (as the first one).

If possible, I would prefer you rebase on top of the mainline
(looks to be next-net here).

Thanks and sorry for nit-picking, I'm testing your stamina :)

^ permalink raw reply	[flat|nested] 197+ messages in thread

* Re: [PATCH v3 6/6] doc: advertise bucket mempool driver
  2018-04-25 22:04       ` Thomas Monjalon
@ 2018-04-26  9:50         ` Andrew Rybchenko
  0 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-04-26  9:50 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev, Olivier MATZ

On 04/26/2018 01:04 AM, Thomas Monjalon wrote:
> 25/04/2018 23:56, Thomas Monjalon:
>> Usually it is better to update the release notes in the main patch
>> implementing the feature (probably the first one in this case).
>> You can also update it step by step in several patches.

Yes, will do in v4.

>> 25/04/2018 18:32, Andrew Rybchenko:
>>> --- a/doc/guides/rel_notes/release_18_05.rst
>>> +++ b/doc/guides/rel_notes/release_18_05.rst
>>> @@ -147,7 +147,15 @@ New Features
>>>     compatible with virtio 0.95 and 1.0. This driver registers ifcvf vDPA driver
>>>     to vhost lib, when virtio connected, with the help of the registered vDPA
>>>     driver the assigned VF gets configured to Rx/Tx directly to VM's virtio
>>> -  vrings.
>> Removing this last word is probably a mistake.

My bad, very inaccurate rebase.

>>> +
>>> +* **Added bucket mempool driver.**
>>> +
>>> +  Added bucket mempool driver which provides a way to allocate contiguous
>>> +  block of objects.
>>> +  Number of objects in the block depends on how many objects fit in
>>> +  RTE_DRIVER_MEMPOOL_BUCKET_SIZE_KB memory chunk which is build time option.
>>> +  The number may be obtained using rte_mempool_ops_get_info() API.
>>> +  Contiguous blocks may be allocated using rte_mempool_get_contig_blocks() API.
> Please add this feature at the beginning of the list (as the first one).

Will do in v4

> If possible, I would prefer you rebase on top of the mainline
> (looks to be next-net here).

Yes, that's right. Will do in v4.

> Thanks and sorry for nit-picking, I'm testing your stamina :)

You're welcome. Thanks for review notes.

Andrew.

^ permalink raw reply	[flat|nested] 197+ messages in thread

* [PATCH v4 0/5] mempool: add bucket driver
  2017-11-24 16:06 [RFC PATCH 0/6] mempool: add bucket mempool driver Andrew Rybchenko
                   ` (10 preceding siblings ...)
  2018-04-25 16:32 ` [PATCH v3 " Andrew Rybchenko
@ 2018-04-26 10:59 ` Andrew Rybchenko
  2018-04-26 10:59   ` [PATCH v4 1/5] mempool/bucket: implement bucket mempool manager Andrew Rybchenko
                     ` (5 more replies)
  11 siblings, 6 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-04-26 10:59 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ

The initial patch series [1] (RFCv1 is [2]) has been split into two parts to
simplify processing. This is the second part; it relies on the first one [3],
which has already been applied.

The patch series adds a bucket mempool driver which allows allocation of
(both physically and virtually) contiguous blocks of objects and adds
mempool API to do it. It is still capable of providing separate objects,
but it is definitely more heavy-weight than the ring/stack drivers.
The driver will be used by future Solarflare driver enhancements which
make use of physically contiguous blocks in the NIC firmware.

The target use case is to dequeue objects in blocks and enqueue separate
objects back (they are collected in buckets to be dequeued). So, the memory
pool with bucket driver is created by an application and provided to
networking PMD receive queue. The choice of bucket driver is done using
rte_eth_dev_pool_ops_supported(). A PMD that relies upon contiguous
block allocation should report the bucket driver as the only supported
and preferred one.
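
For illustration only (this sketch is not part of the series), an application
could select the bucket driver and create such a pool roughly as follows,
assuming the usual rte_ethdev.h/rte_mbuf.h includes; the pool name and sizes
are arbitrary:

static struct rte_mempool *
create_bucket_rx_pool(uint16_t port_id)
{
	struct rte_mempool *mp;

	/* make sure the port reports the bucket ops as supported */
	if (rte_eth_dev_pool_ops_supported(port_id, "bucket") < 0)
		return NULL;

	/* mbuf pool explicitly backed by the bucket driver */
	mp = rte_pktmbuf_pool_create_by_ops("bucket_rx_pool", 8192, 0, 0,
					    RTE_MBUF_DEFAULT_BUF_SIZE,
					    rte_socket_id(), "bucket");

	/* the pool is then passed to rte_eth_rx_queue_setup() as usual */
	return mp;
}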

Introduction of the contiguous block dequeue operation is proven by
performance measurements using autotest with minor enhancements:
 - in the original test bulks are powers of two, which is unacceptable
   for us, so they are changed to multiples of contig_block_size;
 - the test code is duplicated to support plain dequeue and
   dequeue_contig_blocks;
 - all the extra test variations (with/without cache etc) are eliminated;
 - a fake read from the dequeued buffer is added (in both cases) to
   simulate mbufs access.

start performance test for bucket (without cache)
mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=   1 n_keep=  30 Srate_persec=   111935488
mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=   1 n_keep=  60 Srate_persec=   115290931
mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=  15 n_keep=  30 Srate_persec=   353055539
mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=  15 n_keep=  60 Srate_persec=   353330790
mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=   1 n_keep=  30 Srate_persec=   224657407
mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=   1 n_keep=  60 Srate_persec=   230411468
mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=  15 n_keep=  30 Srate_persec=   706700902
mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=  15 n_keep=  60 Srate_persec=   703673139
mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=   1 n_keep=  30 Srate_persec=   425236887
mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=   1 n_keep=  60 Srate_persec=   437295512
mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=  15 n_keep=  30 Srate_persec=  1343409356
mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=  15 n_keep=  60 Srate_persec=  1336567397
start performance test for bucket (without cache + contiguous dequeue)
mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=   1 n_keep=  30 Crate_persec=   122945536
mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=   1 n_keep=  60 Crate_persec=   126458265
mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=  15 n_keep=  30 Crate_persec=   374262988
mempool_autotest cache=   0 cores= 1 n_get_bulk=  15 n_put_bulk=  15 n_keep=  60 Crate_persec=   377316966
mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=   1 n_keep=  30 Crate_persec=   244842496
mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=   1 n_keep=  60 Crate_persec=   251618917
mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=  15 n_keep=  30 Crate_persec=   751226060
mempool_autotest cache=   0 cores= 2 n_get_bulk=  15 n_put_bulk=  15 n_keep=  60 Crate_persec=   756233010
mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=   1 n_keep=  30 Crate_persec=   462068120
mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=   1 n_keep=  60 Crate_persec=   476997221
mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=  15 n_keep=  30 Crate_persec=  1432171313
mempool_autotest cache=   0 cores= 4 n_get_bulk=  15 n_put_bulk=  15 n_keep=  60 Crate_persec=  1438829771

The number of objects in the contiguous block is a function of the bucket
memory size (a .config option) and the total element size. In the future an
additional API which allows parameters to be passed on mempool allocation
may be added.
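
For instance (the element size here is purely illustrative), with the default
RTE_DRIVER_MEMPOOL_BUCKET_SIZE_KB=64, one cache line (64 bytes) reserved for
the bucket header and a total element size (object header + object + trailer)
of 2048 bytes:

	obj_per_bucket = (64 * 1024 - 64) / 2048 = 31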

It breaks the ABI since it changes rte_mempool_ops. The ABI version has
already been bumped in [4].

I've double-checked that mempool_autotest and mempool_perf_autotest
work fine if EAL argument --mbuf-pool-ops-name=bucket is used.

mempool_perf_autotest as is shows a lower rate for bucket than for ring_mp_mc
since the test dequeue bulk sizes are not aligned to the contiguous block size
and the bucket driver is optimized for contiguous block allocation (or at
least allocation in bulks that are multiples of the contiguous block size).

However, real usage of the bucket driver even without contiguous block
dequeue (a transmit-only benchmark which simply generates traffic) shows a
better packet rate. It looks like this is because the driver is
stack-based (per lcore, without locks/barriers) and it improves the cache
hit rate (the working memory is smaller since it is a subset of the mempool
instead of the entire mempool when some objects do not fit into the
mempool cache).

Unfortunately I have not yet finalized the patches which allow the above
measurements to be repeated (they were done using hacks).

The driver is required for [5].


[1] https://dpdk.org/ml/archives/dev/2018-January/088698.html
[2] https://dpdk.org/ml/archives/dev/2017-November/082335.html
[3] https://dpdk.org/ml/archives/dev/2018-April/097354.html
[4] https://dpdk.org/ml/archives/dev/2018-April/097352.html
[5] https://dpdk.org/ml/archives/dev/2018-April/098089.html

v3 -> v4:
 - squash documentation into corresponding patches
 - move the feature release notes to top of features
 - rebase on top of the mainline instead of next-net

v2 -> v3:
 - rebase
 - align rte_mempool_info structure size to avoid ABI breakages in a
   number of cases when something relatively small is added
 - fix a bug in get_count due to objects in the adoption rings
   not being counted
 - squash __mempool_generic_get_contig_blocks() into
   rte_mempool_get_contig_blocks()
 - fix typo in documentation

v1 -> v2:
  - just rebase

RFCv2 -> v1:
  - rebased on top of [3]
  - cleanup deprecation notice when it is done
  - mark a new API experimental
  - move contig blocks dequeue debug checks/processing to the library function
  - add contig blocks get stats
  - add release notes

RFCv1 -> RFCv2:
  - change the info API so that the API user can obtain from the driver
    the information required to know the contiguous block size
  - use SPDX tags
  - avoid all objects affinity to single lcore
  - fix bucket get_count
  - fix NO_CACHE_ALIGN case in bucket mempool

Artem V. Andreev (5):
  mempool/bucket: implement bucket mempool manager
  mempool: implement abstract mempool info API
  mempool: support block dequeue operation
  mempool/bucket: implement block dequeue operation
  mempool/bucket: do not allow one lcore to grab all buckets

 MAINTAINERS                                        |   9 +
 config/common_base                                 |   2 +
 doc/guides/rel_notes/deprecation.rst               |   7 -
 doc/guides/rel_notes/release_18_05.rst             |   9 +
 drivers/mempool/Makefile                           |   1 +
 drivers/mempool/bucket/Makefile                    |  27 +
 drivers/mempool/bucket/meson.build                 |   9 +
 drivers/mempool/bucket/rte_mempool_bucket.c        | 628 +++++++++++++++++++++
 .../mempool/bucket/rte_mempool_bucket_version.map  |   4 +
 lib/librte_mempool/Makefile                        |   1 +
 lib/librte_mempool/meson.build                     |   2 +
 lib/librte_mempool/rte_mempool.c                   |  39 ++
 lib/librte_mempool/rte_mempool.h                   | 171 ++++++
 lib/librte_mempool/rte_mempool_ops.c               |  16 +
 lib/librte_mempool/rte_mempool_version.map         |   8 +
 mk/rte.app.mk                                      |   1 +
 16 files changed, 927 insertions(+), 7 deletions(-)
 create mode 100644 drivers/mempool/bucket/Makefile
 create mode 100644 drivers/mempool/bucket/meson.build
 create mode 100644 drivers/mempool/bucket/rte_mempool_bucket.c
 create mode 100644 drivers/mempool/bucket/rte_mempool_bucket_version.map

-- 
2.14.1

^ permalink raw reply	[flat|nested] 197+ messages in thread

* [PATCH v4 1/5] mempool/bucket: implement bucket mempool manager
  2018-04-26 10:59 ` [PATCH v4 0/5] mempool: add bucket driver Andrew Rybchenko
@ 2018-04-26 10:59   ` Andrew Rybchenko
  2018-04-26 10:59   ` [PATCH v4 2/5] mempool: implement abstract mempool info API Andrew Rybchenko
                     ` (4 subsequent siblings)
  5 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-04-26 10:59 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ, Artem V. Andreev

From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>

The manager provides a way to allocate physically and virtually
contiguous sets of objects.

Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
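A minimal sketch (not part of the patch) of how a pool backed by this driver
could be instantiated through the generic mempool API; the pool name, object
count and object size are arbitrary:

static struct rte_mempool *
create_bucket_pool(void)
{
	struct rte_mempool *mp;

	mp = rte_mempool_create_empty("bucket_pool", 4096, 2048,
				      0, 0, rte_socket_id(), 0);
	if (mp == NULL)
		return NULL;

	/* bind the pool to the bucket ops before populating it */
	if (rte_mempool_set_ops_byname(mp, "bucket", NULL) != 0 ||
	    rte_mempool_populate_default(mp) < 0) {
		rte_mempool_free(mp);
		return NULL;
	}
	return mp;
}
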
 MAINTAINERS                                        |   9 +
 config/common_base                                 |   2 +
 doc/guides/rel_notes/release_18_05.rst             |   7 +
 drivers/mempool/Makefile                           |   1 +
 drivers/mempool/bucket/Makefile                    |  27 +
 drivers/mempool/bucket/meson.build                 |   9 +
 drivers/mempool/bucket/rte_mempool_bucket.c        | 563 +++++++++++++++++++++
 .../mempool/bucket/rte_mempool_bucket_version.map  |   4 +
 mk/rte.app.mk                                      |   1 +
 9 files changed, 623 insertions(+)
 create mode 100644 drivers/mempool/bucket/Makefile
 create mode 100644 drivers/mempool/bucket/meson.build
 create mode 100644 drivers/mempool/bucket/rte_mempool_bucket.c
 create mode 100644 drivers/mempool/bucket/rte_mempool_bucket_version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index 6f0235159..c0f5014c3 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -364,6 +364,15 @@ F: test/test/test_rawdev.c
 F: doc/guides/prog_guide/rawdev.rst
 
 
+Memory Pool Drivers
+-------------------
+
+Bucket memory pool
+M: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
+M: Andrew Rybchenko <arybchenko@solarflare.com>
+F: drivers/mempool/bucket/
+
+
 Bus Drivers
 -----------
 
diff --git a/config/common_base b/config/common_base
index 7e4541244..f24417cb0 100644
--- a/config/common_base
+++ b/config/common_base
@@ -632,6 +632,8 @@ CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG=n
 #
 # Compile Mempool drivers
 #
+CONFIG_RTE_DRIVER_MEMPOOL_BUCKET=y
+CONFIG_RTE_DRIVER_MEMPOOL_BUCKET_SIZE_KB=64
 CONFIG_RTE_DRIVER_MEMPOOL_RING=y
 CONFIG_RTE_DRIVER_MEMPOOL_STACK=y
 
diff --git a/doc/guides/rel_notes/release_18_05.rst b/doc/guides/rel_notes/release_18_05.rst
index 7c135a161..3d56431cc 100644
--- a/doc/guides/rel_notes/release_18_05.rst
+++ b/doc/guides/rel_notes/release_18_05.rst
@@ -41,6 +41,13 @@ New Features
      Also, make sure to start the actual text at the margin.
      =========================================================
 
+* **Added bucket mempool driver.**
+
+  Added a bucket mempool driver which provides a way to allocate contiguous
+  blocks of objects.
+  The number of objects in a block depends on how many objects fit in the
+  RTE_DRIVER_MEMPOOL_BUCKET_SIZE_KB memory chunk, which is a build-time option.
+
 * **Added PMD-recommended Tx and Rx parameters**
 
   Applications can now query drivers for device-tuned values of
diff --git a/drivers/mempool/Makefile b/drivers/mempool/Makefile
index fc8b73b38..28c2e8360 100644
--- a/drivers/mempool/Makefile
+++ b/drivers/mempool/Makefile
@@ -3,6 +3,7 @@
 
 include $(RTE_SDK)/mk/rte.vars.mk
 
+DIRS-$(CONFIG_RTE_DRIVER_MEMPOOL_BUCKET) += bucket
 ifeq ($(CONFIG_RTE_LIBRTE_DPAA_BUS),y)
 DIRS-$(CONFIG_RTE_LIBRTE_DPAA_MEMPOOL) += dpaa
 endif
diff --git a/drivers/mempool/bucket/Makefile b/drivers/mempool/bucket/Makefile
new file mode 100644
index 000000000..7364916bc
--- /dev/null
+++ b/drivers/mempool/bucket/Makefile
@@ -0,0 +1,27 @@
+# SPDX-License-Identifier: BSD-3-Clause
+#
+# Copyright (c) 2017-2018 Solarflare Communications Inc.
+# All rights reserved.
+#
+# This software was jointly developed between OKTET Labs (under contract
+# for Solarflare) and Solarflare Communications, Inc.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_mempool_bucket.a
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+LDLIBS += -lrte_eal -lrte_mempool -lrte_ring
+
+EXPORT_MAP := rte_mempool_bucket_version.map
+
+LIBABIVER := 1
+
+SRCS-$(CONFIG_RTE_DRIVER_MEMPOOL_BUCKET) += rte_mempool_bucket.c
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/mempool/bucket/meson.build b/drivers/mempool/bucket/meson.build
new file mode 100644
index 000000000..618d79128
--- /dev/null
+++ b/drivers/mempool/bucket/meson.build
@@ -0,0 +1,9 @@
+# SPDX-License-Identifier: BSD-3-Clause
+#
+# Copyright (c) 2017-2018 Solarflare Communications Inc.
+# All rights reserved.
+#
+# This software was jointly developed between OKTET Labs (under contract
+# for Solarflare) and Solarflare Communications, Inc.
+
+sources = files('rte_mempool_bucket.c')
diff --git a/drivers/mempool/bucket/rte_mempool_bucket.c b/drivers/mempool/bucket/rte_mempool_bucket.c
new file mode 100644
index 000000000..ef822eb2a
--- /dev/null
+++ b/drivers/mempool/bucket/rte_mempool_bucket.c
@@ -0,0 +1,563 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright (c) 2017-2018 Solarflare Communications Inc.
+ * All rights reserved.
+ *
+ * This software was jointly developed between OKTET Labs (under contract
+ * for Solarflare) and Solarflare Communications, Inc.
+ */
+
+#include <stdbool.h>
+#include <stdio.h>
+#include <string.h>
+
+#include <rte_errno.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+#include <rte_malloc.h>
+
+/*
+ * The general idea of the bucket mempool driver is as follows.
+ * We keep track of physically contiguous groups (buckets) of objects
+ * of a certain size. Every such group has a counter that is
+ * incremented every time an object from that group is enqueued.
+ * Until the bucket is full, no objects from it are eligible for allocation.
+ * If a request is made to dequeue a multiple of the bucket size, it is
+ * satisfied by returning whole buckets, instead of separate objects.
+ */
+
+
+struct bucket_header {
+	unsigned int lcore_id;
+	uint8_t fill_cnt;
+};
+
+struct bucket_stack {
+	unsigned int top;
+	unsigned int limit;
+	void *objects[];
+};
+
+struct bucket_data {
+	unsigned int header_size;
+	unsigned int total_elt_size;
+	unsigned int obj_per_bucket;
+	uintptr_t bucket_page_mask;
+	struct rte_ring *shared_bucket_ring;
+	struct bucket_stack *buckets[RTE_MAX_LCORE];
+	/*
+	 * Multi-producer single-consumer ring to hold objects that are
+	 * returned to the mempool at a different lcore than initially
+	 * dequeued
+	 */
+	struct rte_ring *adoption_buffer_rings[RTE_MAX_LCORE];
+	struct rte_ring *shared_orphan_ring;
+	struct rte_mempool *pool;
+	unsigned int bucket_mem_size;
+};
+
+static struct bucket_stack *
+bucket_stack_create(const struct rte_mempool *mp, unsigned int n_elts)
+{
+	struct bucket_stack *stack;
+
+	stack = rte_zmalloc_socket("bucket_stack",
+				   sizeof(struct bucket_stack) +
+				   n_elts * sizeof(void *),
+				   RTE_CACHE_LINE_SIZE,
+				   mp->socket_id);
+	if (stack == NULL)
+		return NULL;
+	stack->limit = n_elts;
+	stack->top = 0;
+
+	return stack;
+}
+
+static void
+bucket_stack_push(struct bucket_stack *stack, void *obj)
+{
+	RTE_ASSERT(stack->top < stack->limit);
+	stack->objects[stack->top++] = obj;
+}
+
+static void *
+bucket_stack_pop_unsafe(struct bucket_stack *stack)
+{
+	RTE_ASSERT(stack->top > 0);
+	return stack->objects[--stack->top];
+}
+
+static void *
+bucket_stack_pop(struct bucket_stack *stack)
+{
+	if (stack->top == 0)
+		return NULL;
+	return bucket_stack_pop_unsafe(stack);
+}
+
+static int
+bucket_enqueue_single(struct bucket_data *bd, void *obj)
+{
+	int rc = 0;
+	uintptr_t addr = (uintptr_t)obj;
+	struct bucket_header *hdr;
+	unsigned int lcore_id = rte_lcore_id();
+
+	addr &= bd->bucket_page_mask;
+	hdr = (struct bucket_header *)addr;
+
+	if (likely(hdr->lcore_id == lcore_id)) {
+		if (hdr->fill_cnt < bd->obj_per_bucket - 1) {
+			hdr->fill_cnt++;
+		} else {
+			hdr->fill_cnt = 0;
+			/* Stack is big enough to put all buckets */
+			bucket_stack_push(bd->buckets[lcore_id], hdr);
+		}
+	} else if (hdr->lcore_id != LCORE_ID_ANY) {
+		struct rte_ring *adopt_ring =
+			bd->adoption_buffer_rings[hdr->lcore_id];
+
+		rc = rte_ring_enqueue(adopt_ring, obj);
+		/* Ring is big enough to put all objects */
+		RTE_ASSERT(rc == 0);
+	} else if (hdr->fill_cnt < bd->obj_per_bucket - 1) {
+		hdr->fill_cnt++;
+	} else {
+		hdr->fill_cnt = 0;
+		rc = rte_ring_enqueue(bd->shared_bucket_ring, hdr);
+		/* Ring is big enough to put all buckets */
+		RTE_ASSERT(rc == 0);
+	}
+
+	return rc;
+}
+
+static int
+bucket_enqueue(struct rte_mempool *mp, void * const *obj_table,
+	       unsigned int n)
+{
+	struct bucket_data *bd = mp->pool_data;
+	unsigned int i;
+	int rc = 0;
+
+	for (i = 0; i < n; i++) {
+		rc = bucket_enqueue_single(bd, obj_table[i]);
+		RTE_ASSERT(rc == 0);
+	}
+	return rc;
+}
+
+static void **
+bucket_fill_obj_table(const struct bucket_data *bd, void **pstart,
+		      void **obj_table, unsigned int n)
+{
+	unsigned int i;
+	uint8_t *objptr = *pstart;
+
+	for (objptr += bd->header_size, i = 0; i < n;
+	     i++, objptr += bd->total_elt_size)
+		*obj_table++ = objptr;
+	*pstart = objptr;
+	return obj_table;
+}
+
+static int
+bucket_dequeue_orphans(struct bucket_data *bd, void **obj_table,
+		       unsigned int n_orphans)
+{
+	unsigned int i;
+	int rc;
+	uint8_t *objptr;
+
+	rc = rte_ring_dequeue_bulk(bd->shared_orphan_ring, obj_table,
+				   n_orphans, NULL);
+	if (unlikely(rc != (int)n_orphans)) {
+		struct bucket_header *hdr;
+
+		objptr = bucket_stack_pop(bd->buckets[rte_lcore_id()]);
+		hdr = (struct bucket_header *)objptr;
+
+		if (objptr == NULL) {
+			rc = rte_ring_dequeue(bd->shared_bucket_ring,
+					      (void **)&objptr);
+			if (rc != 0) {
+				rte_errno = ENOBUFS;
+				return -rte_errno;
+			}
+			hdr = (struct bucket_header *)objptr;
+			hdr->lcore_id = rte_lcore_id();
+		}
+		hdr->fill_cnt = 0;
+		bucket_fill_obj_table(bd, (void **)&objptr, obj_table,
+				      n_orphans);
+		for (i = n_orphans; i < bd->obj_per_bucket; i++,
+			     objptr += bd->total_elt_size) {
+			rc = rte_ring_enqueue(bd->shared_orphan_ring,
+					      objptr);
+			if (rc != 0) {
+				RTE_ASSERT(0);
+				rte_errno = -rc;
+				return rc;
+			}
+		}
+	}
+
+	return 0;
+}
+
+static int
+bucket_dequeue_buckets(struct bucket_data *bd, void **obj_table,
+		       unsigned int n_buckets)
+{
+	struct bucket_stack *cur_stack = bd->buckets[rte_lcore_id()];
+	unsigned int n_buckets_from_stack = RTE_MIN(n_buckets, cur_stack->top);
+	void **obj_table_base = obj_table;
+
+	n_buckets -= n_buckets_from_stack;
+	while (n_buckets_from_stack-- > 0) {
+		void *obj = bucket_stack_pop_unsafe(cur_stack);
+
+		obj_table = bucket_fill_obj_table(bd, &obj, obj_table,
+						  bd->obj_per_bucket);
+	}
+	while (n_buckets-- > 0) {
+		struct bucket_header *hdr;
+
+		if (unlikely(rte_ring_dequeue(bd->shared_bucket_ring,
+					      (void **)&hdr) != 0)) {
+			/*
+			 * Return the already-dequeued buffers
+			 * back to the mempool
+			 */
+			bucket_enqueue(bd->pool, obj_table_base,
+				       obj_table - obj_table_base);
+			rte_errno = ENOBUFS;
+			return -rte_errno;
+		}
+		hdr->lcore_id = rte_lcore_id();
+		obj_table = bucket_fill_obj_table(bd, (void **)&hdr,
+						  obj_table,
+						  bd->obj_per_bucket);
+	}
+
+	return 0;
+}
+
+static int
+bucket_adopt_orphans(struct bucket_data *bd)
+{
+	int rc = 0;
+	struct rte_ring *adopt_ring =
+		bd->adoption_buffer_rings[rte_lcore_id()];
+
+	if (unlikely(!rte_ring_empty(adopt_ring))) {
+		void *orphan;
+
+		while (rte_ring_sc_dequeue(adopt_ring, &orphan) == 0) {
+			rc = bucket_enqueue_single(bd, orphan);
+			RTE_ASSERT(rc == 0);
+		}
+	}
+	return rc;
+}
+
+static int
+bucket_dequeue(struct rte_mempool *mp, void **obj_table, unsigned int n)
+{
+	struct bucket_data *bd = mp->pool_data;
+	unsigned int n_buckets = n / bd->obj_per_bucket;
+	unsigned int n_orphans = n - n_buckets * bd->obj_per_bucket;
+	int rc = 0;
+
+	bucket_adopt_orphans(bd);
+
+	if (unlikely(n_orphans > 0)) {
+		rc = bucket_dequeue_orphans(bd, obj_table +
+					    (n_buckets * bd->obj_per_bucket),
+					    n_orphans);
+		if (rc != 0)
+			return rc;
+	}
+
+	if (likely(n_buckets > 0)) {
+		rc = bucket_dequeue_buckets(bd, obj_table, n_buckets);
+		if (unlikely(rc != 0) && n_orphans > 0) {
+			rte_ring_enqueue_bulk(bd->shared_orphan_ring,
+					      obj_table + (n_buckets *
+							   bd->obj_per_bucket),
+					      n_orphans, NULL);
+		}
+	}
+
+	return rc;
+}
+
+static void
+count_underfilled_buckets(struct rte_mempool *mp,
+			  void *opaque,
+			  struct rte_mempool_memhdr *memhdr,
+			  __rte_unused unsigned int mem_idx)
+{
+	unsigned int *pcount = opaque;
+	const struct bucket_data *bd = mp->pool_data;
+	unsigned int bucket_page_sz =
+		(unsigned int)(~bd->bucket_page_mask + 1);
+	uintptr_t align;
+	uint8_t *iter;
+
+	align = (uintptr_t)RTE_PTR_ALIGN_CEIL(memhdr->addr, bucket_page_sz) -
+		(uintptr_t)memhdr->addr;
+
+	for (iter = (uint8_t *)memhdr->addr + align;
+	     iter < (uint8_t *)memhdr->addr + memhdr->len;
+	     iter += bucket_page_sz) {
+		struct bucket_header *hdr = (struct bucket_header *)iter;
+
+		*pcount += hdr->fill_cnt;
+	}
+}
+
+static unsigned int
+bucket_get_count(const struct rte_mempool *mp)
+{
+	const struct bucket_data *bd = mp->pool_data;
+	unsigned int count =
+		bd->obj_per_bucket * rte_ring_count(bd->shared_bucket_ring) +
+		rte_ring_count(bd->shared_orphan_ring);
+	unsigned int i;
+
+	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		if (!rte_lcore_is_enabled(i))
+			continue;
+		count += bd->obj_per_bucket * bd->buckets[i]->top +
+			rte_ring_count(bd->adoption_buffer_rings[i]);
+	}
+
+	rte_mempool_mem_iter((struct rte_mempool *)(uintptr_t)mp,
+			     count_underfilled_buckets, &count);
+
+	return count;
+}
+
+static int
+bucket_alloc(struct rte_mempool *mp)
+{
+	int rg_flags = 0;
+	int rc = 0;
+	char rg_name[RTE_RING_NAMESIZE];
+	struct bucket_data *bd;
+	unsigned int i;
+	unsigned int bucket_header_size;
+
+	bd = rte_zmalloc_socket("bucket_pool", sizeof(*bd),
+				RTE_CACHE_LINE_SIZE, mp->socket_id);
+	if (bd == NULL) {
+		rc = -ENOMEM;
+		goto no_mem_for_data;
+	}
+	bd->pool = mp;
+	if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
+		bucket_header_size = sizeof(struct bucket_header);
+	else
+		bucket_header_size = RTE_CACHE_LINE_SIZE;
+	RTE_BUILD_BUG_ON(sizeof(struct bucket_header) > RTE_CACHE_LINE_SIZE);
+	bd->header_size = mp->header_size + bucket_header_size;
+	bd->total_elt_size = mp->header_size + mp->elt_size + mp->trailer_size;
+	bd->bucket_mem_size = RTE_DRIVER_MEMPOOL_BUCKET_SIZE_KB * 1024;
+	bd->obj_per_bucket = (bd->bucket_mem_size - bucket_header_size) /
+		bd->total_elt_size;
+	bd->bucket_page_mask = ~(rte_align64pow2(bd->bucket_mem_size) - 1);
+
+	if (mp->flags & MEMPOOL_F_SP_PUT)
+		rg_flags |= RING_F_SP_ENQ;
+	if (mp->flags & MEMPOOL_F_SC_GET)
+		rg_flags |= RING_F_SC_DEQ;
+
+	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		if (!rte_lcore_is_enabled(i))
+			continue;
+		bd->buckets[i] =
+			bucket_stack_create(mp, mp->size / bd->obj_per_bucket);
+		if (bd->buckets[i] == NULL) {
+			rc = -ENOMEM;
+			goto no_mem_for_stacks;
+		}
+		rc = snprintf(rg_name, sizeof(rg_name),
+			      RTE_MEMPOOL_MZ_FORMAT ".a%u", mp->name, i);
+		if (rc < 0 || rc >= (int)sizeof(rg_name)) {
+			rc = -ENAMETOOLONG;
+			goto no_mem_for_stacks;
+		}
+		bd->adoption_buffer_rings[i] =
+			rte_ring_create(rg_name, rte_align32pow2(mp->size + 1),
+					mp->socket_id,
+					rg_flags | RING_F_SC_DEQ);
+		if (bd->adoption_buffer_rings[i] == NULL) {
+			rc = -rte_errno;
+			goto no_mem_for_stacks;
+		}
+	}
+
+	rc = snprintf(rg_name, sizeof(rg_name),
+		      RTE_MEMPOOL_MZ_FORMAT ".0", mp->name);
+	if (rc < 0 || rc >= (int)sizeof(rg_name)) {
+		rc = -ENAMETOOLONG;
+		goto invalid_shared_orphan_ring;
+	}
+	bd->shared_orphan_ring =
+		rte_ring_create(rg_name, rte_align32pow2(mp->size + 1),
+				mp->socket_id, rg_flags);
+	if (bd->shared_orphan_ring == NULL) {
+		rc = -rte_errno;
+		goto cannot_create_shared_orphan_ring;
+	}
+
+	rc = snprintf(rg_name, sizeof(rg_name),
+		       RTE_MEMPOOL_MZ_FORMAT ".1", mp->name);
+	if (rc < 0 || rc >= (int)sizeof(rg_name)) {
+		rc = -ENAMETOOLONG;
+		goto invalid_shared_bucket_ring;
+	}
+	bd->shared_bucket_ring =
+		rte_ring_create(rg_name,
+				rte_align32pow2((mp->size + 1) /
+						bd->obj_per_bucket),
+				mp->socket_id, rg_flags);
+	if (bd->shared_bucket_ring == NULL) {
+		rc = -rte_errno;
+		goto cannot_create_shared_bucket_ring;
+	}
+
+	mp->pool_data = bd;
+
+	return 0;
+
+cannot_create_shared_bucket_ring:
+invalid_shared_bucket_ring:
+	rte_ring_free(bd->shared_orphan_ring);
+cannot_create_shared_orphan_ring:
+invalid_shared_orphan_ring:
+no_mem_for_stacks:
+	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		rte_free(bd->buckets[i]);
+		rte_ring_free(bd->adoption_buffer_rings[i]);
+	}
+	rte_free(bd);
+no_mem_for_data:
+	rte_errno = -rc;
+	return rc;
+}
+
+static void
+bucket_free(struct rte_mempool *mp)
+{
+	unsigned int i;
+	struct bucket_data *bd = mp->pool_data;
+
+	if (bd == NULL)
+		return;
+
+	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		rte_free(bd->buckets[i]);
+		rte_ring_free(bd->adoption_buffer_rings[i]);
+	}
+
+	rte_ring_free(bd->shared_orphan_ring);
+	rte_ring_free(bd->shared_bucket_ring);
+
+	rte_free(bd);
+}
+
+static ssize_t
+bucket_calc_mem_size(const struct rte_mempool *mp, uint32_t obj_num,
+		     __rte_unused uint32_t pg_shift, size_t *min_total_elt_size,
+		     size_t *align)
+{
+	struct bucket_data *bd = mp->pool_data;
+	unsigned int bucket_page_sz;
+
+	if (bd == NULL)
+		return -EINVAL;
+
+	bucket_page_sz = rte_align32pow2(bd->bucket_mem_size);
+	*align = bucket_page_sz;
+	*min_total_elt_size = bucket_page_sz;
+	/*
+	 * Each bucket occupies its own block aligned to
+	 * bucket_page_sz, so the required amount of memory is
+	 * a multiple of bucket_page_sz.
+	 * We also need extra space for a bucket header
+	 */
+	return ((obj_num + bd->obj_per_bucket - 1) /
+		bd->obj_per_bucket) * bucket_page_sz;
+}
+
+static int
+bucket_populate(struct rte_mempool *mp, unsigned int max_objs,
+		void *vaddr, rte_iova_t iova, size_t len,
+		rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
+{
+	struct bucket_data *bd = mp->pool_data;
+	unsigned int bucket_page_sz;
+	unsigned int bucket_header_sz;
+	unsigned int n_objs;
+	uintptr_t align;
+	uint8_t *iter;
+	int rc;
+
+	if (bd == NULL)
+		return -EINVAL;
+
+	bucket_page_sz = rte_align32pow2(bd->bucket_mem_size);
+	align = RTE_PTR_ALIGN_CEIL((uintptr_t)vaddr, bucket_page_sz) -
+		(uintptr_t)vaddr;
+
+	bucket_header_sz = bd->header_size - mp->header_size;
+	if (iova != RTE_BAD_IOVA)
+		iova += align + bucket_header_sz;
+
+	for (iter = (uint8_t *)vaddr + align, n_objs = 0;
+	     iter < (uint8_t *)vaddr + len && n_objs < max_objs;
+	     iter += bucket_page_sz) {
+		struct bucket_header *hdr = (struct bucket_header *)iter;
+		unsigned int chunk_len = bd->bucket_mem_size;
+
+		if ((size_t)(iter - (uint8_t *)vaddr) + chunk_len > len)
+			chunk_len = len - (iter - (uint8_t *)vaddr);
+		if (chunk_len <= bucket_header_sz)
+			break;
+		chunk_len -= bucket_header_sz;
+
+		hdr->fill_cnt = 0;
+		hdr->lcore_id = LCORE_ID_ANY;
+		rc = rte_mempool_op_populate_default(mp,
+						     RTE_MIN(bd->obj_per_bucket,
+							     max_objs - n_objs),
+						     iter + bucket_header_sz,
+						     iova, chunk_len,
+						     obj_cb, obj_cb_arg);
+		if (rc < 0)
+			return rc;
+		n_objs += rc;
+		if (iova != RTE_BAD_IOVA)
+			iova += bucket_page_sz;
+	}
+
+	return n_objs;
+}
+
+static const struct rte_mempool_ops ops_bucket = {
+	.name = "bucket",
+	.alloc = bucket_alloc,
+	.free = bucket_free,
+	.enqueue = bucket_enqueue,
+	.dequeue = bucket_dequeue,
+	.get_count = bucket_get_count,
+	.calc_mem_size = bucket_calc_mem_size,
+	.populate = bucket_populate,
+};
+
+
+MEMPOOL_REGISTER_OPS(ops_bucket);
diff --git a/drivers/mempool/bucket/rte_mempool_bucket_version.map b/drivers/mempool/bucket/rte_mempool_bucket_version.map
new file mode 100644
index 000000000..9b9ab1a4c
--- /dev/null
+++ b/drivers/mempool/bucket/rte_mempool_bucket_version.map
@@ -0,0 +1,4 @@
+DPDK_18.05 {
+
+	local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index a14579140..1324f19cc 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -125,6 +125,7 @@ endif
 ifeq ($(CONFIG_RTE_BUILD_SHARED_LIB),n)
 # plugins (link only if static libraries)
 
+_LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_BUCKET) += -lrte_mempool_bucket
 _LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_STACK)  += -lrte_mempool_stack
 ifeq ($(CONFIG_RTE_LIBRTE_DPAA_BUS),y)
 _LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA_MEMPOOL)   += -lrte_mempool_dpaa
-- 
2.14.1

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [PATCH v4 2/5] mempool: implement abstract mempool info API
  2018-04-26 10:59 ` [PATCH v4 0/5] mempool: add bucket driver Andrew Rybchenko
  2018-04-26 10:59   ` [PATCH v4 1/5] mempool/bucket: implement bucket mempool manager Andrew Rybchenko
@ 2018-04-26 10:59   ` Andrew Rybchenko
  2018-04-26 10:59   ` [PATCH v4 3/5] mempool: support block dequeue operation Andrew Rybchenko
                     ` (3 subsequent siblings)
  5 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-04-26 10:59 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ, Artem V. Andreev

From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>

Primarily, it is intended as a way for the mempool driver to provide
additional information on how it lays out objects inside the mempool.

Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
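A minimal usage sketch (not part of the patch), assuming mp points to an
existing mempool and the experimental API is allowed at build time:

	struct rte_mempool_info info;
	int ret;

	ret = rte_mempool_ops_get_info(mp, &info);
	if (ret == -ENOTSUP) {
		/* the driver provides no additional information (valid case) */
	} else if (ret == 0) {
		/* info has been filled in by the driver */
	}
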
 lib/librte_mempool/rte_mempool.h           | 50 ++++++++++++++++++++++++++++++
 lib/librte_mempool/rte_mempool_ops.c       | 15 +++++++++
 lib/librte_mempool/rte_mempool_version.map |  7 +++++
 3 files changed, 72 insertions(+)

diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 3e06ae051..853f2da4d 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -189,6 +189,23 @@ struct rte_mempool_memhdr {
 	void *opaque;            /**< Argument passed to the free callback */
 };
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Additional information about the mempool
+ *
+ * The structure is cache-line aligned to avoid ABI breakages in
+ * a number of cases when something small is added.
+ */
+struct rte_mempool_info {
+	/*
+	 * Dummy structure member to make it non emtpy until the first
+	 * real member is added.
+	 */
+	unsigned int dummy;
+} __rte_cache_aligned;
+
 /**
  * The RTE mempool structure.
  */
@@ -499,6 +516,16 @@ int rte_mempool_op_populate_default(struct rte_mempool *mp,
 		void *vaddr, rte_iova_t iova, size_t len,
 		rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Get some additional information about a mempool.
+ */
+typedef int (*rte_mempool_get_info_t)(const struct rte_mempool *mp,
+		struct rte_mempool_info *info);
+
+
 /** Structure defining mempool operations structure */
 struct rte_mempool_ops {
 	char name[RTE_MEMPOOL_OPS_NAMESIZE]; /**< Name of mempool ops struct. */
@@ -517,6 +544,10 @@ struct rte_mempool_ops {
 	 * provided memory chunk.
 	 */
 	rte_mempool_populate_t populate;
+	/**
+	 * Get mempool info
+	 */
+	rte_mempool_get_info_t get_info;
 } __rte_cache_aligned;
 
 #define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */
@@ -679,6 +710,25 @@ int rte_mempool_ops_populate(struct rte_mempool *mp, unsigned int max_objs,
 			     rte_mempool_populate_obj_cb_t *obj_cb,
 			     void *obj_cb_arg);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Wrapper for mempool_ops get_info callback.
+ *
+ * @param[in] mp
+ *   Pointer to the memory pool.
+ * @param[out] info
+ *   Pointer to the rte_mempool_info structure
+ * @return
+ *   - 0: Success; The mempool driver supports retrieving supplementary
+ *        mempool information
+ *   - -ENOTSUP - doesn't support get_info ops (valid case).
+ */
+__rte_experimental
+int rte_mempool_ops_get_info(const struct rte_mempool *mp,
+			 struct rte_mempool_info *info);
+
 /**
  * @internal wrapper for mempool_ops free callback.
  *
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
index ea9be1eb2..efc1c084c 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -59,6 +59,7 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h)
 	ops->get_count = h->get_count;
 	ops->calc_mem_size = h->calc_mem_size;
 	ops->populate = h->populate;
+	ops->get_info = h->get_info;
 
 	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
 
@@ -134,6 +135,20 @@ rte_mempool_ops_populate(struct rte_mempool *mp, unsigned int max_objs,
 			     obj_cb_arg);
 }
 
+/* wrapper to get additional mempool info */
+int
+rte_mempool_ops_get_info(const struct rte_mempool *mp,
+			 struct rte_mempool_info *info)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_get_ops(mp->ops_index);
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->get_info, -ENOTSUP);
+	return ops->get_info(mp, info);
+}
+
+
 /* sets mempool ops previously registered by rte_mempool_register_ops. */
 int
 rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index cf375dbe6..c9d16ecc4 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -57,3 +57,10 @@ DPDK_18.05 {
 	rte_mempool_op_populate_default;
 
 } DPDK_17.11;
+
+EXPERIMENTAL {
+	global:
+
+	rte_mempool_ops_get_info;
+
+} DPDK_18.05;
-- 
2.14.1

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [PATCH v4 3/5] mempool: support block dequeue operation
  2018-04-26 10:59 ` [PATCH v4 0/5] mempool: add bucket driver Andrew Rybchenko
  2018-04-26 10:59   ` [PATCH v4 1/5] mempool/bucket: implement bucket mempool manager Andrew Rybchenko
  2018-04-26 10:59   ` [PATCH v4 2/5] mempool: implement abstract mempool info API Andrew Rybchenko
@ 2018-04-26 10:59   ` Andrew Rybchenko
  2018-04-26 10:59   ` [PATCH v4 4/5] mempool/bucket: implement " Andrew Rybchenko
                     ` (2 subsequent siblings)
  5 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-04-26 10:59 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ, Artem V. Andreev

From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>

If the mempool manager supports object blocks (physically and
virtually contiguous sets of objects), it is sufficient to get only
the first object of each block, and the new function avoids filling
in information about every block member.
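
Purely for illustration (not part of this patch), a minimal usage
sketch; it assumes a build with -DALLOW_EXPERIMENTAL_API since both
calls below are experimental, and the helper name and the block count
of 16 are arbitrary choices:

#include <errno.h>

#include <rte_common.h>
#include <rte_mempool.h>

static int
dequeue_blocks_sketch(struct rte_mempool *mp)
{
	struct rte_mempool_info info;
	void *blocks[16];
	int rc;

	/* Block dequeue is optional; check driver support first. */
	if (rte_mempool_ops_get_info(mp, &info) != 0 ||
	    info.contig_block_size == 0)
		return -ENOTSUP;

	/*
	 * On success, each blocks[i] points to the first object of a
	 * contiguous run of info.contig_block_size objects.
	 */
	rc = rte_mempool_get_contig_blocks(mp, blocks, RTE_DIM(blocks));
	if (rc != 0)
		return rc; /* e.g. -ENOBUFS when the pool is exhausted */

	/* ... use the blocks, then enqueue the objects back as usual ... */
	return 0;
}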

Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
 doc/guides/rel_notes/deprecation.rst       |   7 --
 lib/librte_mempool/Makefile                |   1 +
 lib/librte_mempool/meson.build             |   2 +
 lib/librte_mempool/rte_mempool.c           |  39 +++++++++
 lib/librte_mempool/rte_mempool.h           | 131 +++++++++++++++++++++++++++--
 lib/librte_mempool/rte_mempool_ops.c       |   1 +
 lib/librte_mempool/rte_mempool_version.map |   1 +
 7 files changed, 170 insertions(+), 12 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 72ab33cb7..da156c3cc 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -42,13 +42,6 @@ Deprecation Notices
 
   - ``rte_eal_mbuf_default_mempool_ops``
 
-* mempool: several API and ABI changes are planned in v18.05.
-
-  The following changes are planned:
-
-  - addition of new op to allocate contiguous
-    block of objects if underlying driver supports it.
-
 * mbuf: The opaque ``mbuf->hash.sched`` field will be updated to support generic
   definition in line with the ethdev TM and MTR APIs. Currently, this field
   is defined in librte_sched in a non-generic way. The new generic format
diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
index 7f19f005a..e3c32b14f 100644
--- a/lib/librte_mempool/Makefile
+++ b/lib/librte_mempool/Makefile
@@ -10,6 +10,7 @@ CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
 # Allow deprecated symbol to use deprecated rte_mempool_populate_iova_tab()
 # from earlier deprecated rte_mempool_populate_phys_tab()
 CFLAGS += -Wno-deprecated-declarations
+CFLAGS += -DALLOW_EXPERIMENTAL_API
 LDLIBS += -lrte_eal -lrte_ring
 
 EXPORT_MAP := rte_mempool_version.map
diff --git a/lib/librte_mempool/meson.build b/lib/librte_mempool/meson.build
index baf2d24d5..d507e5511 100644
--- a/lib/librte_mempool/meson.build
+++ b/lib/librte_mempool/meson.build
@@ -1,6 +1,8 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2017 Intel Corporation
 
+allow_experimental_apis = true
+
 extra_flags = []
 
 # Allow deprecated symbol to use deprecated rte_mempool_populate_iova_tab()
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 84b3d640f..cf5d124ec 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -1255,6 +1255,36 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp,
 #endif
 }
 
+void
+rte_mempool_contig_blocks_check_cookies(const struct rte_mempool *mp,
+	void * const *first_obj_table_const, unsigned int n, int free)
+{
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+	struct rte_mempool_info info;
+	const size_t total_elt_sz =
+		mp->header_size + mp->elt_size + mp->trailer_size;
+	unsigned int i, j;
+
+	rte_mempool_ops_get_info(mp, &info);
+
+	for (i = 0; i < n; ++i) {
+		void *first_obj = first_obj_table_const[i];
+
+		for (j = 0; j < info.contig_block_size; ++j) {
+			void *obj;
+
+			obj = (void *)((uintptr_t)first_obj + j * total_elt_sz);
+			rte_mempool_check_cookies(mp, &obj, 1, free);
+		}
+	}
+#else
+	RTE_SET_USED(mp);
+	RTE_SET_USED(first_obj_table_const);
+	RTE_SET_USED(n);
+	RTE_SET_USED(free);
+#endif
+}
+
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 static void
 mempool_obj_audit(struct rte_mempool *mp, __rte_unused void *opaque,
@@ -1320,6 +1350,7 @@ void
 rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 {
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+	struct rte_mempool_info info;
 	struct rte_mempool_debug_stats sum;
 	unsigned lcore_id;
 #endif
@@ -1361,6 +1392,7 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 
 	/* sum and dump statistics */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+	rte_mempool_ops_get_info(mp, &info);
 	memset(&sum, 0, sizeof(sum));
 	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
 		sum.put_bulk += mp->stats[lcore_id].put_bulk;
@@ -1369,6 +1401,8 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 		sum.get_success_objs += mp->stats[lcore_id].get_success_objs;
 		sum.get_fail_bulk += mp->stats[lcore_id].get_fail_bulk;
 		sum.get_fail_objs += mp->stats[lcore_id].get_fail_objs;
+		sum.get_success_blks += mp->stats[lcore_id].get_success_blks;
+		sum.get_fail_blks += mp->stats[lcore_id].get_fail_blks;
 	}
 	fprintf(f, "  stats:\n");
 	fprintf(f, "    put_bulk=%"PRIu64"\n", sum.put_bulk);
@@ -1377,6 +1411,11 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 	fprintf(f, "    get_success_objs=%"PRIu64"\n", sum.get_success_objs);
 	fprintf(f, "    get_fail_bulk=%"PRIu64"\n", sum.get_fail_bulk);
 	fprintf(f, "    get_fail_objs=%"PRIu64"\n", sum.get_fail_objs);
+	if (info.contig_block_size > 0) {
+		fprintf(f, "    get_success_blks=%"PRIu64"\n",
+			sum.get_success_blks);
+		fprintf(f, "    get_fail_blks=%"PRIu64"\n", sum.get_fail_blks);
+	}
 #else
 	fprintf(f, "  no statistics available\n");
 #endif
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 853f2da4d..1f59553b3 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -70,6 +70,10 @@ struct rte_mempool_debug_stats {
 	uint64_t get_success_objs; /**< Objects successfully allocated. */
 	uint64_t get_fail_bulk;    /**< Failed allocation number. */
 	uint64_t get_fail_objs;    /**< Objects that failed to be allocated. */
+	/** Successful allocation number of contiguous blocks. */
+	uint64_t get_success_blks;
+	/** Failed allocation number of contiguous blocks. */
+	uint64_t get_fail_blks;
 } __rte_cache_aligned;
 #endif
 
@@ -199,11 +203,8 @@ struct rte_mempool_memhdr {
  * a number of cases when something small is added.
  */
 struct rte_mempool_info {
-	/*
-	 * Dummy structure member to make it non emtpy until the first
-	 * real member is added.
-	 */
-	unsigned int dummy;
+	/** Number of objects in the contiguous block */
+	unsigned int contig_block_size;
 } __rte_cache_aligned;
 
 /**
@@ -282,8 +283,16 @@ struct rte_mempool {
 			mp->stats[__lcore_id].name##_bulk += 1;	\
 		}                                               \
 	} while(0)
+#define __MEMPOOL_CONTIG_BLOCKS_STAT_ADD(mp, name, n) do {                    \
+		unsigned int __lcore_id = rte_lcore_id();       \
+		if (__lcore_id < RTE_MAX_LCORE) {               \
+			mp->stats[__lcore_id].name##_blks += n;	\
+			mp->stats[__lcore_id].name##_bulk += 1;	\
+		}                                               \
+	} while (0)
 #else
 #define __MEMPOOL_STAT_ADD(mp, name, n) do {} while(0)
+#define __MEMPOOL_CONTIG_BLOCKS_STAT_ADD(mp, name, n) do {} while (0)
 #endif
 
 /**
@@ -351,6 +360,38 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp,
 #define __mempool_check_cookies(mp, obj_table_const, n, free) do {} while(0)
 #endif /* RTE_LIBRTE_MEMPOOL_DEBUG */
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * @internal Check contiguous object blocks and update cookies or panic.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param first_obj_table_const
+ *   Pointer to a table of void * pointers (first object of the contiguous
+ *   object blocks).
+ * @param n
+ *   Number of contiguous object blocks.
+ * @param free
+ *   - 0: object is supposed to be allocated, mark it as free
+ *   - 1: object is supposed to be free, mark it as allocated
+ *   - 2: just check that cookie is valid (free or allocated)
+ */
+void rte_mempool_contig_blocks_check_cookies(const struct rte_mempool *mp,
+	void * const *first_obj_table_const, unsigned int n, int free);
+
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+#define __mempool_contig_blocks_check_cookies(mp, first_obj_table_const, n, \
+					      free) \
+	rte_mempool_contig_blocks_check_cookies(mp, first_obj_table_const, n, \
+						free)
+#else
+#define __mempool_contig_blocks_check_cookies(mp, first_obj_table_const, n, \
+					      free) \
+	do {} while (0)
+#endif /* RTE_LIBRTE_MEMPOOL_DEBUG */
+
 #define RTE_MEMPOOL_OPS_NAMESIZE 32 /**< Max length of ops struct name. */
 
 /**
@@ -382,6 +423,15 @@ typedef int (*rte_mempool_enqueue_t)(struct rte_mempool *mp,
 typedef int (*rte_mempool_dequeue_t)(struct rte_mempool *mp,
 		void **obj_table, unsigned int n);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Dequeue a number of contiguous object blocks from the external pool.
+ */
+typedef int (*rte_mempool_dequeue_contig_blocks_t)(struct rte_mempool *mp,
+		 void **first_obj_table, unsigned int n);
+
 /**
  * Return the number of available objects in the external pool.
  */
@@ -548,6 +598,10 @@ struct rte_mempool_ops {
 	 * Get mempool info
 	 */
 	rte_mempool_get_info_t get_info;
+	/**
+	 * Dequeue a number of contiguous object blocks.
+	 */
+	rte_mempool_dequeue_contig_blocks_t dequeue_contig_blocks;
 } __rte_cache_aligned;
 
 #define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */
@@ -625,6 +679,30 @@ rte_mempool_ops_dequeue_bulk(struct rte_mempool *mp,
 	return ops->dequeue(mp, obj_table, n);
 }
 
+/**
+ * @internal Wrapper for mempool_ops dequeue_contig_blocks callback.
+ *
+ * @param[in] mp
+ *   Pointer to the memory pool.
+ * @param[out] first_obj_table
+ *   Pointer to a table of void * pointers (first objects).
+ * @param[in] n
+ *   Number of blocks to get.
+ * @return
+ *   - 0: Success; got n blocks.
+ *   - <0: Error; code of dequeue function.
+ */
+static inline int
+rte_mempool_ops_dequeue_contig_blocks(struct rte_mempool *mp,
+		void **first_obj_table, unsigned int n)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_get_ops(mp->ops_index);
+	RTE_ASSERT(ops->dequeue_contig_blocks != NULL);
+	return ops->dequeue_contig_blocks(mp, first_obj_table, n);
+}
+
 /**
  * @internal wrapper for mempool_ops enqueue callback.
  *
@@ -1539,6 +1617,49 @@ rte_mempool_get(struct rte_mempool *mp, void **obj_p)
 	return rte_mempool_get_bulk(mp, obj_p, 1);
 }
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Get contiguous blocks of objects from the mempool.
+ *
+ * If cache is enabled, consider flushing it first so that objects are
+ * reused as soon as possible.
+ *
+ * The application should check that the driver supports the operation
+ * by calling rte_mempool_ops_get_info() and checking that `contig_block_size`
+ * is not zero.
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @param first_obj_table
+ *   A pointer to a pointer to the first object in each block.
+ * @param n
+ *   The number of blocks to get from mempool.
+ * @return
+ *   - 0: Success; blocks taken.
+ *   - -ENOBUFS: Not enough entries in the mempool; no object is retrieved.
+ *   - -EOPNOTSUPP: The mempool driver does not support block dequeue
+ */
+static __rte_always_inline int
+__rte_experimental
+rte_mempool_get_contig_blocks(struct rte_mempool *mp,
+			      void **first_obj_table, unsigned int n)
+{
+	int ret;
+
+	ret = rte_mempool_ops_dequeue_contig_blocks(mp, first_obj_table, n);
+	if (ret == 0) {
+		__MEMPOOL_CONTIG_BLOCKS_STAT_ADD(mp, get_success, n);
+		__mempool_contig_blocks_check_cookies(mp, first_obj_table, n,
+						      1);
+	} else {
+		__MEMPOOL_CONTIG_BLOCKS_STAT_ADD(mp, get_fail, n);
+	}
+
+	return ret;
+}
+
 /**
  * Return the number of entries in the mempool.
  *
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
index efc1c084c..a27e1fa51 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -60,6 +60,7 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h)
 	ops->calc_mem_size = h->calc_mem_size;
 	ops->populate = h->populate;
 	ops->get_info = h->get_info;
+	ops->dequeue_contig_blocks = h->dequeue_contig_blocks;
 
 	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
 
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index c9d16ecc4..1c406b5b0 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -53,6 +53,7 @@ DPDK_17.11 {
 DPDK_18.05 {
 	global:
 
+	rte_mempool_contig_blocks_check_cookies;
 	rte_mempool_op_calc_mem_size_default;
 	rte_mempool_op_populate_default;
 
-- 
2.14.1

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [PATCH v4 4/5] mempool/bucket: implement block dequeue operation
  2018-04-26 10:59 ` [PATCH v4 0/5] mempool: add bucket driver Andrew Rybchenko
                     ` (2 preceding siblings ...)
  2018-04-26 10:59   ` [PATCH v4 3/5] mempool: support block dequeue operation Andrew Rybchenko
@ 2018-04-26 10:59   ` Andrew Rybchenko
  2018-04-26 10:59   ` [PATCH v4 5/5] mempool/bucket: do not allow one lcore to grab all buckets Andrew Rybchenko
  2018-04-26 21:35   ` [PATCH v4 0/5] mempool: add bucket driver Thomas Monjalon
  5 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-04-26 10:59 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ, Artem V. Andreev

From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>

Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 doc/guides/rel_notes/release_18_05.rst      |  2 ++
 drivers/mempool/bucket/rte_mempool_bucket.c | 52 +++++++++++++++++++++++++++++
 2 files changed, 54 insertions(+)

diff --git a/doc/guides/rel_notes/release_18_05.rst b/doc/guides/rel_notes/release_18_05.rst
index 3d56431cc..99f98c5ea 100644
--- a/doc/guides/rel_notes/release_18_05.rst
+++ b/doc/guides/rel_notes/release_18_05.rst
@@ -47,6 +47,8 @@ New Features
   block of objects.
   Number of objects in the block depends on how many objects fit in
   RTE_DRIVER_MEMPOOL_BUCKET_SIZE_KB memory chunk which is build time option.
+  The number may be obtained using the rte_mempool_ops_get_info() API.
+  Contiguous blocks may be allocated using rte_mempool_get_contig_blocks().
 
 * **Added PMD-recommended Tx and Rx parameters**
 
diff --git a/drivers/mempool/bucket/rte_mempool_bucket.c b/drivers/mempool/bucket/rte_mempool_bucket.c
index ef822eb2a..24be24e96 100644
--- a/drivers/mempool/bucket/rte_mempool_bucket.c
+++ b/drivers/mempool/bucket/rte_mempool_bucket.c
@@ -294,6 +294,46 @@ bucket_dequeue(struct rte_mempool *mp, void **obj_table, unsigned int n)
 	return rc;
 }
 
+static int
+bucket_dequeue_contig_blocks(struct rte_mempool *mp, void **first_obj_table,
+			     unsigned int n)
+{
+	struct bucket_data *bd = mp->pool_data;
+	const uint32_t header_size = bd->header_size;
+	struct bucket_stack *cur_stack = bd->buckets[rte_lcore_id()];
+	unsigned int n_buckets_from_stack = RTE_MIN(n, cur_stack->top);
+	struct bucket_header *hdr;
+	void **first_objp = first_obj_table;
+
+	bucket_adopt_orphans(bd);
+
+	n -= n_buckets_from_stack;
+	while (n_buckets_from_stack-- > 0) {
+		hdr = bucket_stack_pop_unsafe(cur_stack);
+		*first_objp++ = (uint8_t *)hdr + header_size;
+	}
+	if (n > 0) {
+		if (unlikely(rte_ring_dequeue_bulk(bd->shared_bucket_ring,
+						   first_objp, n, NULL) != n)) {
+			/* Return the already dequeued buckets */
+			while (first_objp-- != first_obj_table) {
+				bucket_stack_push(cur_stack,
+						  (uint8_t *)*first_objp -
+						  header_size);
+			}
+			rte_errno = ENOBUFS;
+			return -rte_errno;
+		}
+		while (n-- > 0) {
+			hdr = (struct bucket_header *)*first_objp;
+			hdr->lcore_id = rte_lcore_id();
+			*first_objp++ = (uint8_t *)hdr + header_size;
+		}
+	}
+
+	return 0;
+}
+
 static void
 count_underfilled_buckets(struct rte_mempool *mp,
 			  void *opaque,
@@ -548,6 +588,16 @@ bucket_populate(struct rte_mempool *mp, unsigned int max_objs,
 	return n_objs;
 }
 
+static int
+bucket_get_info(const struct rte_mempool *mp, struct rte_mempool_info *info)
+{
+	struct bucket_data *bd = mp->pool_data;
+
+	info->contig_block_size = bd->obj_per_bucket;
+	return 0;
+}
+
+
 static const struct rte_mempool_ops ops_bucket = {
 	.name = "bucket",
 	.alloc = bucket_alloc,
@@ -557,6 +607,8 @@ static const struct rte_mempool_ops ops_bucket = {
 	.get_count = bucket_get_count,
 	.calc_mem_size = bucket_calc_mem_size,
 	.populate = bucket_populate,
+	.get_info = bucket_get_info,
+	.dequeue_contig_blocks = bucket_dequeue_contig_blocks,
 };
 
 
-- 
2.14.1

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* [PATCH v4 5/5] mempool/bucket: do not allow one lcore to grab all buckets
  2018-04-26 10:59 ` [PATCH v4 0/5] mempool: add bucket driver Andrew Rybchenko
                     ` (3 preceding siblings ...)
  2018-04-26 10:59   ` [PATCH v4 4/5] mempool/bucket: implement " Andrew Rybchenko
@ 2018-04-26 10:59   ` Andrew Rybchenko
  2018-04-26 21:35   ` [PATCH v4 0/5] mempool: add bucket driver Thomas Monjalon
  5 siblings, 0 replies; 197+ messages in thread
From: Andrew Rybchenko @ 2018-04-26 10:59 UTC (permalink / raw)
  To: dev; +Cc: Olivier MATZ, Artem V. Andreev

From: "Artem V. Andreev" <Artem.Andreev@oktetlabs.ru>

Signed-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/mempool/bucket/rte_mempool_bucket.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/drivers/mempool/bucket/rte_mempool_bucket.c b/drivers/mempool/bucket/rte_mempool_bucket.c
index 24be24e96..78d2b9d04 100644
--- a/drivers/mempool/bucket/rte_mempool_bucket.c
+++ b/drivers/mempool/bucket/rte_mempool_bucket.c
@@ -42,6 +42,7 @@ struct bucket_data {
 	unsigned int header_size;
 	unsigned int total_elt_size;
 	unsigned int obj_per_bucket;
+	unsigned int bucket_stack_thresh;
 	uintptr_t bucket_page_mask;
 	struct rte_ring *shared_bucket_ring;
 	struct bucket_stack *buckets[RTE_MAX_LCORE];
@@ -139,6 +140,7 @@ bucket_enqueue(struct rte_mempool *mp, void * const *obj_table,
 	       unsigned int n)
 {
 	struct bucket_data *bd = mp->pool_data;
+	struct bucket_stack *local_stack = bd->buckets[rte_lcore_id()];
 	unsigned int i;
 	int rc = 0;
 
@@ -146,6 +148,15 @@ bucket_enqueue(struct rte_mempool *mp, void * const *obj_table,
 		rc = bucket_enqueue_single(bd, obj_table[i]);
 		RTE_ASSERT(rc == 0);
 	}
+	if (local_stack->top > bd->bucket_stack_thresh) {
+		rte_ring_enqueue_bulk(bd->shared_bucket_ring,
+				      &local_stack->objects
+				      [bd->bucket_stack_thresh],
+				      local_stack->top -
+				      bd->bucket_stack_thresh,
+				      NULL);
+		local_stack->top = bd->bucket_stack_thresh;
+	}
 	return rc;
 }
 
@@ -409,6 +420,8 @@ bucket_alloc(struct rte_mempool *mp)
 	bd->obj_per_bucket = (bd->bucket_mem_size - bucket_header_size) /
 		bd->total_elt_size;
 	bd->bucket_page_mask = ~(rte_align64pow2(bd->bucket_mem_size) - 1);
+	/* eventually this should be a tunable parameter */
+	bd->bucket_stack_thresh = (mp->size / bd->obj_per_bucket) * 4 / 3;
 
 	if (mp->flags & MEMPOOL_F_SP_PUT)
 		rg_flags |= RING_F_SP_ENQ;
-- 
2.14.1

^ permalink raw reply related	[flat|nested] 197+ messages in thread

* Re: [PATCH v4 0/5] mempool: add bucket driver
  2018-04-26 10:59 ` [PATCH v4 0/5] mempool: add bucket driver Andrew Rybchenko
                     ` (4 preceding siblings ...)
  2018-04-26 10:59   ` [PATCH v4 5/5] mempool/bucket: do not allow one lcore to grab all buckets Andrew Rybchenko
@ 2018-04-26 21:35   ` Thomas Monjalon
  5 siblings, 0 replies; 197+ messages in thread
From: Thomas Monjalon @ 2018-04-26 21:35 UTC (permalink / raw)
  To: Andrew Rybchenko, Artem Andreev; +Cc: dev, Olivier MATZ

> Artem V. Andreev (5):
>   mempool/bucket: implement bucket mempool manager
>   mempool: implement abstract mempool info API
>   mempool: support block dequeue operation
>   mempool/bucket: implement block dequeue operation
>   mempool/bucket: do not allow one lcore to grab all buckets

Applied, thanks

^ permalink raw reply	[flat|nested] 197+ messages in thread

end of thread, other threads:[~2018-04-26 21:35 UTC | newest]

Thread overview: 197+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-11-24 16:06 [RFC PATCH 0/6] mempool: add bucket mempool driver Andrew Rybchenko
2017-11-24 16:06 ` [RFC PATCH 1/6] mempool: implement abstract mempool info API Andrew Rybchenko
2017-12-14 13:36   ` Olivier MATZ
2018-01-17 15:03     ` Andrew Rybchenko
2017-11-24 16:06 ` [RFC PATCH 2/6] mempool: implement clustered object allocation Andrew Rybchenko
2017-12-14 13:37   ` Olivier MATZ
2018-01-17 15:03     ` Andrew Rybchenko
2018-01-17 15:55       ` santosh
2018-01-17 16:37         ` Andrew Rybchenko
2017-11-24 16:06 ` [RFC PATCH 3/6] mempool/bucket: implement bucket mempool manager Andrew Rybchenko
2017-12-14 13:38   ` Olivier MATZ
2018-01-17 15:06     ` Andrew Rybchenko
2017-11-24 16:06 ` [RFC PATCH 4/6] mempool: add a function to flush default cache Andrew Rybchenko
2017-12-14 13:38   ` Olivier MATZ
2018-01-17 15:07     ` Andrew Rybchenko
2017-11-24 16:06 ` [RFC PATCH 5/6] mempool: support block dequeue operation Andrew Rybchenko
2017-12-14 13:38   ` Olivier MATZ
2017-11-24 16:06 ` [RFC PATCH 6/6] mempool/bucket: implement " Andrew Rybchenko
2017-12-14 13:36 ` [RFC PATCH 0/6] mempool: add bucket mempool driver Olivier MATZ
2018-01-17 15:03   ` Andrew Rybchenko
2018-01-23 13:15 ` [RFC v2 00/17] " Andrew Rybchenko
2018-01-23 13:15   ` [RFC v2 01/17] mempool: fix phys contig check if populate default skipped Andrew Rybchenko
2018-01-31 16:45     ` Olivier Matz
2018-02-01  5:05       ` santosh
2018-02-01  6:54         ` Andrew Rybchenko
2018-02-01  9:09           ` santosh
2018-02-01  9:18             ` Andrew Rybchenko
2018-02-01  9:30               ` santosh
2018-02-01 10:00                 ` Andrew Rybchenko
2018-02-01 10:14                   ` Olivier Matz
2018-02-01 10:33                     ` santosh
2018-02-01 14:02                       ` Andrew Rybchenko
2018-02-01 10:17                   ` santosh
2018-02-01 14:02     ` [PATCH] " Andrew Rybchenko
2018-02-05 23:53       ` [dpdk-stable] " Thomas Monjalon
2018-01-23 13:15   ` [RFC v2 02/17] mempool: add op to calculate memory size to be allocated Andrew Rybchenko
2018-01-31 16:45     ` Olivier Matz
2018-02-01  7:15       ` Andrew Rybchenko
2018-01-23 13:15   ` [RFC v2 03/17] mempool/octeontx: add callback to calculate memory size Andrew Rybchenko
     [not found]     ` <BN3PR07MB2513732462EB5FE5E1B05713E3FA0@BN3PR07MB2513.namprd07.prod.outlook.com>
2018-02-01 10:01       ` santosh
2018-02-01 13:40         ` santosh
2018-03-10 15:49           ` Andrew Rybchenko
2018-03-11  6:31             ` santosh
2018-01-23 13:15   ` [RFC v2 04/17] mempool: add op to populate objects using provided memory Andrew Rybchenko
2018-01-31 16:45     ` Olivier Matz
2018-02-01  8:51       ` Andrew Rybchenko
2018-01-23 13:16   ` [RFC v2 05/17] mempool/octeontx: implement callback to populate objects Andrew Rybchenko
2018-01-23 13:16   ` [RFC v2 06/17] mempool: remove callback to get capabilities Andrew Rybchenko
2018-01-23 13:16   ` [RFC v2 07/17] mempool: deprecate xmem functions Andrew Rybchenko
2018-01-23 13:16   ` [RFC v2 08/17] mempool/octeontx: prepare to remove register memory area op Andrew Rybchenko
2018-01-23 13:16   ` [RFC v2 09/17] mempool/dpaa: convert to use populate driver op Andrew Rybchenko
2018-01-23 13:16   ` [RFC v2 10/17] mempool: remove callback to register memory area Andrew Rybchenko
2018-01-23 13:16   ` [RFC v2 11/17] mempool: ensure the mempool is initialized before populating Andrew Rybchenko
2018-01-31 16:45     ` Olivier Matz
2018-02-01  8:53       ` Andrew Rybchenko
2018-01-23 13:16   ` [RFC v2 12/17] mempool/bucket: implement bucket mempool manager Andrew Rybchenko
2018-01-23 13:16   ` [RFC v2 13/17] mempool: support flushing the default cache of the mempool Andrew Rybchenko
2018-01-23 13:16   ` [RFC v2 14/17] mempool: implement abstract mempool info API Andrew Rybchenko
2018-01-23 13:16   ` [RFC v2 15/17] mempool: support block dequeue operation Andrew Rybchenko
2018-01-23 13:16   ` [RFC v2 16/17] mempool/bucket: implement " Andrew Rybchenko
2018-01-23 13:16   ` [RFC v2 17/17] mempool/bucket: do not allow one lcore to grab all buckets Andrew Rybchenko
2018-01-31 16:44   ` [RFC v2 00/17] mempool: add bucket mempool driver Olivier Matz
2018-03-10 15:39   ` [PATCH v1 0/9] mempool: prepare to add bucket driver Andrew Rybchenko
2018-03-10 15:39     ` [PATCH v1 1/9] mempool: add op to calculate memory size to be allocated Andrew Rybchenko
2018-03-11 12:51       ` santosh
2018-03-12  6:53         ` Andrew Rybchenko
2018-03-19 17:03       ` Olivier Matz
2018-03-20 10:29         ` Andrew Rybchenko
2018-03-20 14:41         ` Bruce Richardson
2018-03-10 15:39     ` [PATCH v1 2/9] mempool: add op to populate objects using provided memory Andrew Rybchenko
2018-03-19 17:04       ` Olivier Matz
2018-03-21  7:05         ` Andrew Rybchenko
2018-03-10 15:39     ` [PATCH v1 3/9] mempool: remove callback to get capabilities Andrew Rybchenko
2018-03-14 14:40       ` Burakov, Anatoly
2018-03-14 16:12         ` Andrew Rybchenko
2018-03-14 16:53           ` Burakov, Anatoly
2018-03-14 17:24             ` Andrew Rybchenko
2018-03-15  9:48               ` Burakov, Anatoly
2018-03-15 11:49                 ` Andrew Rybchenko
2018-03-15 12:00                   ` Burakov, Anatoly
2018-03-15 12:44                     ` Andrew Rybchenko
2018-03-19 17:05                       ` Olivier Matz
2018-03-19 17:06       ` Olivier Matz
2018-03-10 15:39     ` [PATCH v1 4/9] mempool: deprecate xmem functions Andrew Rybchenko
2018-03-10 15:39     ` [PATCH v1 5/9] mempool/octeontx: prepare to remove register memory area op Andrew Rybchenko
2018-03-10 15:39     ` [PATCH v1 6/9] mempool/dpaa: " Andrew Rybchenko
2018-03-10 15:39     ` [PATCH v1 7/9] mempool: remove callback to register memory area Andrew Rybchenko
2018-03-10 15:39     ` [PATCH v1 8/9] mempool: ensure the mempool is initialized before populating Andrew Rybchenko
2018-03-19 17:06       ` Olivier Matz
2018-03-20 13:32         ` Andrew Rybchenko
2018-03-20 16:57           ` Olivier Matz
2018-03-10 15:39     ` [PATCH v1 9/9] mempool: support flushing the default cache of the mempool Andrew Rybchenko
2018-03-14 15:49     ` [PATCH v1 0/9] mempool: prepare to add bucket driver santosh
2018-03-14 15:57       ` Andrew Rybchenko
2018-03-19 17:03     ` Olivier Matz
2018-03-20 10:09       ` Andrew Rybchenko
2018-03-20 11:04         ` Thomas Monjalon
2018-03-25 16:20   ` [PATCH v2 00/11] " Andrew Rybchenko
2018-03-25 16:20     ` [PATCH v2 01/11] mempool: fix memhdr leak when no objects are populated Andrew Rybchenko
2018-03-25 16:20     ` [PATCH v2 02/11] mempool: rename flag to control IOVA-contiguous objects Andrew Rybchenko
2018-03-25 16:20     ` [PATCH v2 03/11] mempool: ensure the mempool is initialized before populating Andrew Rybchenko
2018-03-25 16:20     ` [PATCH v2 04/11] mempool: add op to calculate memory size to be allocated Andrew Rybchenko
2018-03-25 16:20     ` [PATCH v2 05/11] mempool: add op to populate objects using provided memory Andrew Rybchenko
2018-03-25 16:20     ` [PATCH v2 06/11] mempool: remove callback to get capabilities Andrew Rybchenko
2018-03-25 16:20     ` [PATCH v2 07/11] mempool: deprecate xmem functions Andrew Rybchenko
2018-03-25 16:20     ` [PATCH v2 08/11] mempool/octeontx: prepare to remove register memory area op Andrew Rybchenko
2018-03-25 16:20     ` [PATCH v2 09/11] mempool/dpaa: " Andrew Rybchenko
2018-03-26  7:13       ` Andrew Rybchenko
2018-03-25 16:20     ` [PATCH v2 10/11] mempool: remove callback to register memory area Andrew Rybchenko
2018-03-25 16:20     ` [PATCH v2 11/11] mempool: support flushing the default cache of the mempool Andrew Rybchenko
2018-03-26 16:09   ` [PATCH v3 00/11] mempool: prepare to add bucket driver Andrew Rybchenko
2018-03-26 16:09     ` [PATCH v3 01/11] mempool: fix memhdr leak when no objects are populated Andrew Rybchenko
2018-04-06 15:50       ` Olivier Matz
2018-03-26 16:09     ` [PATCH v3 02/11] mempool: rename flag to control IOVA-contiguous objects Andrew Rybchenko
2018-04-06 15:50       ` Olivier Matz
2018-03-26 16:09     ` [PATCH v3 03/11] mempool: ensure the mempool is initialized before populating Andrew Rybchenko
2018-04-04 15:06       ` santosh
2018-04-06 15:50       ` Olivier Matz
2018-03-26 16:09     ` [PATCH v3 04/11] mempool: add op to calculate memory size to be allocated Andrew Rybchenko
2018-04-04 15:08       ` santosh
2018-04-06 15:51       ` Olivier Matz
2018-04-12 15:22       ` Burakov, Anatoly
2018-03-26 16:09     ` [PATCH v3 05/11] mempool: add op to populate objects using provided memory Andrew Rybchenko
2018-04-04 15:09       ` santosh
2018-04-06 15:51       ` Olivier Matz
2018-03-26 16:09     ` [PATCH v3 06/11] mempool: remove callback to get capabilities Andrew Rybchenko
2018-04-04 15:10       ` santosh
2018-04-06 15:51       ` Olivier Matz
2018-03-26 16:09     ` [PATCH v3 07/11] mempool: deprecate xmem functions Andrew Rybchenko
2018-04-06 15:52       ` Olivier Matz
2018-03-26 16:09     ` [PATCH v3 08/11] mempool/octeontx: prepare to remove register memory area op Andrew Rybchenko
2018-04-04 15:12       ` santosh
2018-03-26 16:09     ` [PATCH v3 09/11] mempool/dpaa: " Andrew Rybchenko
2018-04-05  8:25       ` Hemant Agrawal
2018-03-26 16:09     ` [PATCH v3 10/11] mempool: remove callback to register memory area Andrew Rybchenko
2018-04-04 15:13       ` santosh
2018-04-06 15:52       ` Olivier Matz
2018-03-26 16:09     ` [PATCH v3 11/11] mempool: support flushing the default cache of the mempool Andrew Rybchenko
2018-04-06 15:53       ` Olivier Matz
2018-03-26 16:12   ` [PATCH v1 0/6] mempool: add bucket driver Andrew Rybchenko
2018-03-26 16:12     ` [PATCH v1 1/6] mempool/bucket: implement bucket mempool manager Andrew Rybchenko
2018-03-26 16:12     ` [PATCH v1 2/6] mempool: implement abstract mempool info API Andrew Rybchenko
2018-04-19 16:42       ` Olivier Matz
2018-04-25  9:57         ` Andrew Rybchenko
2018-04-25 10:26           ` Olivier Matz
2018-03-26 16:12     ` [PATCH v1 3/6] mempool: support block dequeue operation Andrew Rybchenko
2018-04-19 16:41       ` Olivier Matz
2018-04-25  9:49         ` Andrew Rybchenko
2018-03-26 16:12     ` [PATCH v1 4/6] mempool/bucket: implement " Andrew Rybchenko
2018-03-26 16:12     ` [PATCH v1 5/6] mempool/bucket: do not allow one lcore to grab all buckets Andrew Rybchenko
2018-03-26 16:12     ` [PATCH v1 6/6] doc: advertise bucket mempool driver Andrew Rybchenko
2018-04-19 16:43       ` Olivier Matz
2018-04-19 16:41     ` [PATCH v1 0/6] mempool: add bucket driver Olivier Matz
2018-04-16 13:24 ` [PATCH v4 00/11] mempool: prepare to " Andrew Rybchenko
2018-04-16 13:24   ` [PATCH v4 01/11] mempool: fix memhdr leak when no objects are populated Andrew Rybchenko
2018-04-16 13:24   ` [PATCH v4 02/11] mempool: rename flag to control IOVA-contiguous objects Andrew Rybchenko
2018-04-16 13:24   ` [PATCH v4 03/11] mempool: ensure the mempool is initialized before populating Andrew Rybchenko
2018-04-16 13:24   ` [PATCH v4 04/11] mempool: add op to calculate memory size to be allocated Andrew Rybchenko
2018-04-16 15:33     ` Olivier Matz
2018-04-16 15:41       ` Andrew Rybchenko
2018-04-17 10:23     ` Burakov, Anatoly
2018-04-16 13:24   ` [PATCH v4 05/11] mempool: add op to populate objects using provided memory Andrew Rybchenko
2018-04-16 13:24   ` [PATCH v4 06/11] mempool: remove callback to get capabilities Andrew Rybchenko
2018-04-16 13:24   ` [PATCH v4 07/11] mempool: deprecate xmem functions Andrew Rybchenko
2018-04-16 13:24   ` [PATCH v4 08/11] mempool/octeontx: prepare to remove register memory area op Andrew Rybchenko
2018-04-16 13:24   ` [PATCH v4 09/11] mempool/dpaa: " Andrew Rybchenko
2018-04-16 13:24   ` [PATCH v4 10/11] mempool: remove callback to register memory area Andrew Rybchenko
2018-04-16 13:24   ` [PATCH v4 11/11] mempool: support flushing the default cache of the mempool Andrew Rybchenko
2018-04-24  0:20   ` [PATCH v4 00/11] mempool: prepare to add bucket driver Thomas Monjalon
2018-04-16 13:33 ` [PATCH v2 0/6] mempool: " Andrew Rybchenko
2018-04-16 13:33   ` [PATCH v2 1/6] mempool/bucket: implement bucket mempool manager Andrew Rybchenko
2018-04-16 13:33   ` [PATCH v2 2/6] mempool: implement abstract mempool info API Andrew Rybchenko
2018-04-25  8:44     ` Olivier Matz
2018-04-16 13:33   ` [PATCH v2 3/6] mempool: support block dequeue operation Andrew Rybchenko
2018-04-25  8:45     ` Olivier Matz
2018-04-16 13:33   ` [PATCH v2 4/6] mempool/bucket: implement " Andrew Rybchenko
2018-04-16 13:33   ` [PATCH v2 5/6] mempool/bucket: do not allow one lcore to grab all buckets Andrew Rybchenko
2018-04-16 13:33   ` [PATCH v2 6/6] doc: advertise bucket mempool driver Andrew Rybchenko
2018-04-24 23:00   ` [PATCH v2 0/6] mempool: add bucket driver Thomas Monjalon
2018-04-25  8:43     ` Olivier Matz
2018-04-25 16:32 ` [PATCH v3 " Andrew Rybchenko
2018-04-25 16:32   ` [PATCH v3 1/6] mempool/bucket: implement bucket mempool manager Andrew Rybchenko
2018-04-25 16:32   ` [PATCH v3 2/6] mempool: implement abstract mempool info API Andrew Rybchenko
2018-04-25 16:32   ` [PATCH v3 3/6] mempool: support block dequeue operation Andrew Rybchenko
2018-04-25 16:32   ` [PATCH v3 4/6] mempool/bucket: implement " Andrew Rybchenko
2018-04-25 16:32   ` [PATCH v3 5/6] mempool/bucket: do not allow one lcore to grab all buckets Andrew Rybchenko
2018-04-25 16:32   ` [PATCH v3 6/6] doc: advertise bucket mempool driver Andrew Rybchenko
2018-04-25 21:56     ` Thomas Monjalon
2018-04-25 22:04       ` Thomas Monjalon
2018-04-26  9:50         ` Andrew Rybchenko
2018-04-26 10:59 ` [PATCH v4 0/5] mempool: add bucket driver Andrew Rybchenko
2018-04-26 10:59   ` [PATCH v4 1/5] mempool/bucket: implement bucket mempool manager Andrew Rybchenko
2018-04-26 10:59   ` [PATCH v4 2/5] mempool: implement abstract mempool info API Andrew Rybchenko
2018-04-26 10:59   ` [PATCH v4 3/5] mempool: support block dequeue operation Andrew Rybchenko
2018-04-26 10:59   ` [PATCH v4 4/5] mempool/bucket: implement " Andrew Rybchenko
2018-04-26 10:59   ` [PATCH v4 5/5] mempool/bucket: do not allow one lcore to grab all buckets Andrew Rybchenko
2018-04-26 21:35   ` [PATCH v4 0/5] mempool: add bucket driver Thomas Monjalon
