* [PATCH 0/5] add external mempool manager
@ 2016-01-26 17:25 David Hunt
  2016-01-26 17:25 ` [PATCH 1/5] mempool: add external mempool manager support David Hunt
                   ` (6 more replies)
  0 siblings, 7 replies; 238+ messages in thread
From: David Hunt @ 2016-01-26 17:25 UTC (permalink / raw)
  To: dev

Hi all on the list.

Here's a proposed patch set for an external mempool manager.

The external mempool manager is an extension to the mempool API that allows
users to add and use an external mempool manager, enabling external memory
subsystems, such as hardware memory management systems and software-based
memory allocators, to be used with DPDK.

The existing API to the internal DPDK mempool manager will remain unchanged
and will be backward compatible.

There are two aspects to the external mempool manager:
  1. Adding the code for your new mempool handler. This is achieved by adding a
     new mempool handler source file into the librte_mempool library, and
     using the REGISTER_MEMPOOL_HANDLER macro.
  2. Using the new API to call rte_mempool_create_ext to create a new mempool
     using the name parameter to identify which handler to use.

New API calls added:
 1. A new mempool 'create' function which accepts a mempool handler name.
 2. A new 'rte_get_mempool_handler' function which accepts a mempool handler
    name, and returns the index of the relevant set of callbacks for that
    mempool handler.

Several external mempool managers may be used in the same application. A new
mempool can then be created by using the new 'create' function, providing the
mempool handler name to point the mempool to the relevant mempool manager
callback structure.

The old 'create' function can still be called by legacy programs, and will
internally work out the mempool handler based on the flags provided (single
producer, single consumer, etc.). By default, handlers are created internally
to implement the built-in DPDK mempool manager and mempool types.
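As a rough sketch (not the actual DPDK code; the flag values below mirror the
MEMPOOL_F_SP_PUT/MEMPOOL_F_SC_GET bits described later, and the helper name is
hypothetical), the default handler selection from the legacy flags could look
like this:

```c
/* Hypothetical flag bits mirroring MEMPOOL_F_SP_PUT / MEMPOOL_F_SC_GET;
 * the helper is an illustration only, not DPDK code. */
#define F_SP_PUT 0x0004 /* default put is "single-producer" */
#define F_SC_GET 0x0008 /* default get is "single-consumer" */

/* Map the legacy create flags to the name of a default handler. Note
 * that ring_sp_sc is chosen only when BOTH flags are present. */
static const char *
default_handler_name(unsigned flags)
{
	int sp = (flags & F_SP_PUT) != 0;
	int sc = (flags & F_SC_GET) != 0;

	if (sp && sc)
		return "ring_sp_sc";
	if (sp)
		return "ring_sp_mc";
	if (sc)
		return "ring_mp_sc";
	return "ring_mp_mc";
}
```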

The external mempool manager needs to provide the following functions.
 1. alloc     - allocates the mempool memory, and adds each object onto a ring
 2. put       - puts an object back into the mempool once an application has
                finished with it
 3. get       - gets an object from the mempool for use by the application
 4. get_count - gets the number of available objects in the mempool
 5. free      - frees the mempool memory

Every time a get/put/get_count is called from the application/PMD, the
callback for that mempool is called. These functions are in the fast path,
and an unoptimised handler may limit performance.
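To illustrate the shape of those five operations, here is a minimal,
self-contained LIFO pool in plain C. All names and signatures here are
assumptions for illustration; the real callback prototypes live in
rte_mempool.h:

```c
#include <stdlib.h>

/* A toy LIFO pool sketching the five operations a handler provides. */
struct toy_pool {
	void **objs;   /* stack of pointers to free objects */
	unsigned len;  /* number of free objects */
	unsigned size; /* capacity of the pool */
};

/* alloc: set up the pool's backing store for n objects */
static struct toy_pool *
toy_alloc(unsigned n)
{
	struct toy_pool *p = malloc(sizeof(*p));

	if (p == NULL)
		return NULL;
	p->objs = calloc(n, sizeof(void *));
	if (p->objs == NULL) {
		free(p);
		return NULL;
	}
	p->len = 0;
	p->size = n;
	return p;
}

/* put: return n objects to the pool */
static int
toy_put(struct toy_pool *p, void * const *obj_table, unsigned n)
{
	unsigned i;

	if (p->len + n > p->size)
		return -1; /* would overflow the pool */
	for (i = 0; i < n; i++)
		p->objs[p->len++] = obj_table[i];
	return 0;
}

/* get: take n objects from the pool (LIFO order) */
static int
toy_get(struct toy_pool *p, void **obj_table, unsigned n)
{
	unsigned i;

	if (p->len < n)
		return -1; /* not enough free objects */
	for (i = 0; i < n; i++)
		obj_table[i] = p->objs[--p->len];
	return 0;
}

/* get_count: number of objects currently available */
static unsigned
toy_get_count(const struct toy_pool *p)
{
	return p->len;
}

/* free: release the pool's memory */
static void
toy_free(struct toy_pool *p)
{
	free(p->objs);
	free(p);
}
```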

The new APIs are as follows:

1. rte_mempool_create_ext

struct rte_mempool *
rte_mempool_create_ext(const char *name, unsigned n,
        unsigned cache_size, unsigned private_data_size,
        int socket_id, unsigned flags,
        const char *handler_name);

2. rte_get_mempool_handler

int16_t
rte_get_mempool_handler(const char *name);

Please see rte_mempool.h for further information on the parameters.


The important thing to note is that the mempool handler is passed by name
to rte_mempool_create_ext, which in turn calls rte_get_mempool_handler to
get the handler index, which is stored in the rte_mempool structure. This
allows multiple processes to use the same mempool, as the function pointers
are accessed via the handler index.
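A minimal sketch of that name-to-index lookup, assuming a fixed-size handler
table (the toy_* names are hypothetical, not the DPDK functions):

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical handler registry with lookup by name. Storing the index
 * in the mempool (rather than raw function pointers) is what lets
 * secondary processes, whose code may be mapped at different addresses,
 * resolve the callbacks for a shared pool. */
#define MAX_HANDLERS 16

struct toy_handler {
	const char *name;
	/* the real structure also carries the alloc/put/get/
	 * get_count/free callbacks */
};

static struct toy_handler handler_table[MAX_HANDLERS];
static int16_t num_handlers;

/* Register a handler; returns its index, or -1 if the table is full. */
static int16_t
toy_add_handler(const char *name)
{
	if (num_handlers >= MAX_HANDLERS)
		return -1;
	handler_table[num_handlers].name = name;
	return num_handlers++;
}

/* Look a handler up by name; returns its index, or -1 if not found. */
static int16_t
toy_get_handler(const char *name)
{
	int16_t i;

	for (i = 0; i < num_handlers; i++)
		if (strcmp(handler_table[i].name, name) == 0)
			return i;
	return -1;
}
```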

The mempool handler structure contains callbacks to the implementation of
the handler, and is set up for registration as follows:

static struct rte_mempool_handler handler_sp_mc = {
    .name = "ring_sp_mc",
    .alloc = rte_mempool_common_ring_alloc,
    .put = common_ring_sp_put,
    .get = common_ring_mc_get,
    .get_count = common_ring_get_count,
    .free = common_ring_free,
};

And then the following macro will register the handler in the array of handlers:

REGISTER_MEMPOOL_HANDLER(handler_sp_mc);

For an example of a simple malloc-based mempool manager, see
lib/librte_mempool/custom_mempool.c

For an example of API usage, please see app/test/test_ext_mempool.c, which
implements a rudimentary mempool manager using simple mallocs for each
mempool object (custom_mempool.c).
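For illustration, a registration macro of this kind can be built on a
GCC/clang constructor attribute, so each handler file registers itself before
main() runs. This is a hypothetical sketch of the mechanism, not the actual
REGISTER_MEMPOOL_HANDLER implementation:

```c
/* Hypothetical self-registration in the style of
 * REGISTER_MEMPOOL_HANDLER: a constructor function runs before main()
 * and appends the handler to a global table. */
#define TOY_MAX_HANDLERS 8

struct toy_handler {
	const char *name;
};

static const struct toy_handler *toy_table[TOY_MAX_HANDLERS];
static int toy_count;

static void
toy_add_handler(const struct toy_handler *h)
{
	if (toy_count < TOY_MAX_HANDLERS)
		toy_table[toy_count++] = h;
}

/* Expands to a uniquely named constructor that registers the handler
 * at program start-up. */
#define TOY_REGISTER_HANDLER(h)					\
static void __attribute__((constructor)) toy_reg_##h(void)	\
{								\
	toy_add_handler(&h);					\
}

static const struct toy_handler handler_sp_mc = {
	.name = "ring_sp_mc",
};
TOY_REGISTER_HANDLER(handler_sp_mc)
```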


David Hunt (5):
  mempool: add external mempool manager support
  mempool: add stack (lifo) based external mempool handler
  mempool: add custom external mempool handler example
  mempool: add autotest for external mempool custom example
  mempool: allow rte_pktmbuf_pool_create switch between mempool handlers

 app/test/Makefile                         |   1 +
 app/test/test_ext_mempool.c               | 470 ++++++++++++++++++++++++++++++
 app/test/test_mempool_perf.c              |   2 -
 lib/librte_mbuf/rte_mbuf.c                |  11 +
 lib/librte_mempool/Makefile               |   3 +
 lib/librte_mempool/custom_mempool.c       | 158 ++++++++++
 lib/librte_mempool/rte_mempool.c          | 208 +++++++++----
 lib/librte_mempool/rte_mempool.h          | 205 +++++++++++--
 lib/librte_mempool/rte_mempool_default.c  | 229 +++++++++++++++
 lib/librte_mempool/rte_mempool_internal.h |  70 +++++
 lib/librte_mempool/rte_mempool_stack.c    | 162 ++++++++++
 11 files changed, 1430 insertions(+), 89 deletions(-)
 create mode 100644 app/test/test_ext_mempool.c
 create mode 100644 lib/librte_mempool/custom_mempool.c
 create mode 100644 lib/librte_mempool/rte_mempool_default.c
 create mode 100644 lib/librte_mempool/rte_mempool_internal.h
 create mode 100644 lib/librte_mempool/rte_mempool_stack.c

-- 
1.9.3


* [PATCH 1/5] mempool: add external mempool manager support
  2016-01-26 17:25 [PATCH 0/5] add external mempool manager David Hunt
@ 2016-01-26 17:25 ` David Hunt
  2016-01-28 17:52   ` Jerin Jacob
  2016-02-04 14:52   ` Olivier MATZ
  2016-01-26 17:25 ` [PATCH 2/5] mempool: add stack (lifo) based external mempool handler David Hunt
                   ` (5 subsequent siblings)
  6 siblings, 2 replies; 238+ messages in thread
From: David Hunt @ 2016-01-26 17:25 UTC (permalink / raw)
  To: dev

Adds the new rte_mempool_create_ext API and a callback mechanism for
external mempool handlers.

Modifies the existing rte_mempool_create to set up the handler_idx to
the relevant mempool handler based on the handler name:
	ring_sp_sc
	ring_mp_mc
	ring_sp_mc
	ring_mp_sc

Signed-off-by: David Hunt <david.hunt@intel.com>
---
 app/test/test_mempool_perf.c              |   1 -
 lib/librte_mempool/Makefile               |   1 +
 lib/librte_mempool/rte_mempool.c          | 210 +++++++++++++++++++--------
 lib/librte_mempool/rte_mempool.h          | 207 +++++++++++++++++++++++----
 lib/librte_mempool/rte_mempool_default.c  | 229 ++++++++++++++++++++++++++++++
 lib/librte_mempool/rte_mempool_internal.h |  74 ++++++++++
 6 files changed, 634 insertions(+), 88 deletions(-)
 create mode 100644 lib/librte_mempool/rte_mempool_default.c
 create mode 100644 lib/librte_mempool/rte_mempool_internal.h

diff --git a/app/test/test_mempool_perf.c b/app/test/test_mempool_perf.c
index cdc02a0..091c1df 100644
--- a/app/test/test_mempool_perf.c
+++ b/app/test/test_mempool_perf.c
@@ -161,7 +161,6 @@ per_lcore_mempool_test(__attribute__((unused)) void *arg)
 							   n_get_bulk);
 				if (unlikely(ret < 0)) {
 					rte_mempool_dump(stdout, mp);
-					rte_ring_dump(stdout, mp->ring);
 					/* in this case, objects are lost... */
 					return -1;
 				}
diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
index a6898ef..7c81ef6 100644
--- a/lib/librte_mempool/Makefile
+++ b/lib/librte_mempool/Makefile
@@ -42,6 +42,7 @@ LIBABIVER := 1
 
 # all source are stored in SRCS-y
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool.c
+SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_default.c
 ifeq ($(CONFIG_RTE_LIBRTE_XEN_DOM0),y)
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_dom0_mempool.c
 endif
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index aff5f6d..8c01838 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -59,10 +59,11 @@
 #include <rte_spinlock.h>
 
 #include "rte_mempool.h"
+#include "rte_mempool_internal.h"
 
 TAILQ_HEAD(rte_mempool_list, rte_tailq_entry);
 
-static struct rte_tailq_elem rte_mempool_tailq = {
+struct rte_tailq_elem rte_mempool_tailq = {
 	.name = "RTE_MEMPOOL",
 };
 EAL_REGISTER_TAILQ(rte_mempool_tailq)
@@ -149,7 +150,7 @@ mempool_add_elem(struct rte_mempool *mp, void *obj, uint32_t obj_idx,
 		obj_init(mp, obj_init_arg, obj, obj_idx);
 
 	/* enqueue in ring */
-	rte_ring_sp_enqueue(mp->ring, obj);
+	rte_mempool_ext_put_bulk(mp, &obj, 1);
 }
 
 uint32_t
@@ -375,48 +376,28 @@ rte_mempool_xmem_usage(void *vaddr, uint32_t elt_num, size_t elt_sz,
 	return usz;
 }
 
-#ifndef RTE_LIBRTE_XEN_DOM0
-/* stub if DOM0 support not configured */
-struct rte_mempool *
-rte_dom0_mempool_create(const char *name __rte_unused,
-			unsigned n __rte_unused,
-			unsigned elt_size __rte_unused,
-			unsigned cache_size __rte_unused,
-			unsigned private_data_size __rte_unused,
-			rte_mempool_ctor_t *mp_init __rte_unused,
-			void *mp_init_arg __rte_unused,
-			rte_mempool_obj_ctor_t *obj_init __rte_unused,
-			void *obj_init_arg __rte_unused,
-			int socket_id __rte_unused,
-			unsigned flags __rte_unused)
-{
-	rte_errno = EINVAL;
-	return NULL;
-}
-#endif
-
 /* create the mempool */
 struct rte_mempool *
 rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
-		   unsigned cache_size, unsigned private_data_size,
-		   rte_mempool_ctor_t *mp_init, void *mp_init_arg,
-		   rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
-		   int socket_id, unsigned flags)
+			unsigned cache_size, unsigned private_data_size,
+			rte_mempool_ctor_t *mp_init, void *mp_init_arg,
+			rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
+			int socket_id, unsigned flags)
 {
 	if (rte_xen_dom0_supported())
 		return rte_dom0_mempool_create(name, n, elt_size,
-					       cache_size, private_data_size,
-					       mp_init, mp_init_arg,
-					       obj_init, obj_init_arg,
-					       socket_id, flags);
+			cache_size, private_data_size,
+			mp_init, mp_init_arg,
+			obj_init, obj_init_arg,
+			socket_id, flags);
 	else
 		return rte_mempool_xmem_create(name, n, elt_size,
-					       cache_size, private_data_size,
-					       mp_init, mp_init_arg,
-					       obj_init, obj_init_arg,
-					       socket_id, flags,
-					       NULL, NULL, MEMPOOL_PG_NUM_DEFAULT,
-					       MEMPOOL_PG_SHIFT_MAX);
+			cache_size, private_data_size,
+			mp_init, mp_init_arg,
+			obj_init, obj_init_arg,
+			socket_id, flags,
+			NULL, NULL,
+			MEMPOOL_PG_NUM_DEFAULT, MEMPOOL_PG_SHIFT_MAX);
 }
 
 /*
@@ -435,11 +416,9 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
 		const phys_addr_t paddr[], uint32_t pg_num, uint32_t pg_shift)
 {
 	char mz_name[RTE_MEMZONE_NAMESIZE];
-	char rg_name[RTE_RING_NAMESIZE];
 	struct rte_mempool_list *mempool_list;
 	struct rte_mempool *mp = NULL;
 	struct rte_tailq_entry *te;
-	struct rte_ring *r;
 	const struct rte_memzone *mz;
 	size_t mempool_size;
 	int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
@@ -469,7 +448,7 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
 
 	/* asked cache too big */
 	if (cache_size > RTE_MEMPOOL_CACHE_MAX_SIZE ||
-	    CALC_CACHE_FLUSHTHRESH(cache_size) > n) {
+		CALC_CACHE_FLUSHTHRESH(cache_size) > n) {
 		rte_errno = EINVAL;
 		return NULL;
 	}
@@ -502,16 +481,8 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
 		return NULL;
 	}
 
-	rte_rwlock_write_lock(RTE_EAL_MEMPOOL_RWLOCK);
 
-	/* allocate the ring that will be used to store objects */
-	/* Ring functions will return appropriate errors if we are
-	 * running as a secondary process etc., so no checks made
-	 * in this function for that condition */
-	snprintf(rg_name, sizeof(rg_name), RTE_MEMPOOL_MZ_FORMAT, name);
-	r = rte_ring_create(rg_name, rte_align32pow2(n+1), socket_id, rg_flags);
-	if (r == NULL)
-		goto exit;
+	rte_rwlock_write_lock(RTE_EAL_MEMPOOL_RWLOCK);
 
 	/*
 	 * reserve a memory zone for this mempool: private data is
@@ -588,7 +559,6 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
 	memset(mp, 0, sizeof(*mp));
 	snprintf(mp->name, sizeof(mp->name), "%s", name);
 	mp->phys_addr = mz->phys_addr;
-	mp->ring = r;
 	mp->size = n;
 	mp->flags = flags;
 	mp->elt_size = objsz.elt_size;
@@ -598,6 +568,22 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
 	mp->cache_flushthresh = CALC_CACHE_FLUSHTHRESH(cache_size);
 	mp->private_data_size = private_data_size;
 
+	/*
+	 * Since we have four SP/SC/MP/MC ring combinations plus the
+	 * stack handler, examine the flags to set the correct index
+	 * into the handler table.
+	 */
+	if (flags & MEMPOOL_F_USE_STACK)
+		mp->handler_idx = rte_get_mempool_handler("stack");
+	else if ((flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET)) == (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
+		mp->handler_idx = rte_get_mempool_handler("ring_sp_sc");
+	else if (flags & MEMPOOL_F_SP_PUT)
+		mp->handler_idx = rte_get_mempool_handler("ring_sp_mc");
+	else if (flags & MEMPOOL_F_SC_GET)
+		mp->handler_idx = rte_get_mempool_handler("ring_mp_sc");
+	else
+		mp->handler_idx = rte_get_mempool_handler("ring_mp_mc");
+
 	/* calculate address of the first element for continuous mempool. */
 	obj = (char *)mp + MEMPOOL_HEADER_SIZE(mp, pg_num) +
 		private_data_size;
@@ -613,7 +599,6 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
 		mp->elt_va_start = (uintptr_t)obj;
 		mp->elt_pa[0] = mp->phys_addr +
 			(mp->elt_va_start - (uintptr_t)mp);
-
 	/* mempool elements in a separate chunk of memory. */
 	} else {
 		mp->elt_va_start = (uintptr_t)vaddr;
@@ -622,6 +607,10 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
 
 	mp->elt_va_end = mp->elt_va_start;
 
+	/* Parameters are set up. Call the mempool handler alloc */
+	if ((mp->rt_pool = rte_mempool_ext_alloc(mp, name, n, socket_id, flags)) == NULL)
+		goto exit;
+
 	/* call the initializer */
 	if (mp_init)
 		mp_init(mp, mp_init_arg);
@@ -646,7 +635,7 @@ rte_mempool_count(const struct rte_mempool *mp)
 {
 	unsigned count;
 
-	count = rte_ring_count(mp->ring);
+	count = rte_mempool_ext_get_count(mp);
 
 #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
 	{
@@ -681,7 +670,9 @@ rte_mempool_dump_cache(FILE *f, const struct rte_mempool *mp)
 	fprintf(f, "    cache_size=%"PRIu32"\n", mp->cache_size);
 	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
 		cache_count = mp->local_cache[lcore_id].len;
-		fprintf(f, "    cache_count[%u]=%u\n", lcore_id, cache_count);
+		if (cache_count > 0)
+			fprintf(f, "    cache_count[%u]=%u\n",
+						lcore_id, cache_count);
 		count += cache_count;
 	}
 	fprintf(f, "    total_cache_count=%u\n", count);
@@ -802,14 +793,13 @@ rte_mempool_dump(FILE *f, const struct rte_mempool *mp)
 
 	fprintf(f, "mempool <%s>@%p\n", mp->name, mp);
 	fprintf(f, "  flags=%x\n", mp->flags);
-	fprintf(f, "  ring=<%s>@%p\n", mp->ring->name, mp->ring);
 	fprintf(f, "  phys_addr=0x%" PRIx64 "\n", mp->phys_addr);
 	fprintf(f, "  size=%"PRIu32"\n", mp->size);
 	fprintf(f, "  header_size=%"PRIu32"\n", mp->header_size);
 	fprintf(f, "  elt_size=%"PRIu32"\n", mp->elt_size);
 	fprintf(f, "  trailer_size=%"PRIu32"\n", mp->trailer_size);
 	fprintf(f, "  total_obj_size=%"PRIu32"\n",
-	       mp->header_size + mp->elt_size + mp->trailer_size);
+		   mp->header_size + mp->elt_size + mp->trailer_size);
 
 	fprintf(f, "  private_data_size=%"PRIu32"\n", mp->private_data_size);
 	fprintf(f, "  pg_num=%"PRIu32"\n", mp->pg_num);
@@ -825,7 +815,7 @@ rte_mempool_dump(FILE *f, const struct rte_mempool *mp)
 			mp->size);
 
 	cache_count = rte_mempool_dump_cache(f, mp);
-	common_count = rte_ring_count(mp->ring);
+	common_count = rte_mempool_ext_get_count(mp);
 	if ((cache_count + common_count) > mp->size)
 		common_count = mp->size - cache_count;
 	fprintf(f, "  common_pool_count=%u\n", common_count);
@@ -904,7 +894,7 @@ rte_mempool_lookup(const char *name)
 }
 
 void rte_mempool_walk(void (*func)(const struct rte_mempool *, void *),
-		      void *arg)
+			  void *arg)
 {
 	struct rte_tailq_entry *te = NULL;
 	struct rte_mempool_list *mempool_list;
@@ -919,3 +909,111 @@ void rte_mempool_walk(void (*func)(const struct rte_mempool *, void *),
 
 	rte_rwlock_read_unlock(RTE_EAL_MEMPOOL_RWLOCK);
 }
+
+
+/* create the mempool using an external mempool manager */
+struct rte_mempool *
+rte_mempool_create_ext(const char *name, unsigned n, unsigned elt_size,
+			unsigned cache_size, unsigned private_data_size,
+			rte_mempool_ctor_t *mp_init, void *mp_init_arg,
+			rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
+			int socket_id, unsigned flags,
+			const char *handler_name)
+{
+	char mz_name[RTE_MEMZONE_NAMESIZE];
+	struct rte_mempool_list *mempool_list;
+	struct rte_mempool *mp = NULL;
+	struct rte_tailq_entry *te;
+	const struct rte_memzone *mz;
+	size_t mempool_size;
+	int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
+	int rg_flags = 0;
+	int16_t handler_idx;
+
+	mempool_list = RTE_TAILQ_CAST(rte_mempool_tailq.head, rte_mempool_list);
+
+	/* asked cache too big */
+	if (cache_size > RTE_MEMPOOL_CACHE_MAX_SIZE ||
+		CALC_CACHE_FLUSHTHRESH(cache_size) > n) {
+		rte_errno = EINVAL;
+		return NULL;
+	}
+
+	handler_idx = rte_get_mempool_handler(handler_name);
+	if (handler_idx < 0) {
+		RTE_LOG(ERR, MEMPOOL, "Cannot find mempool handler by name!\n");
+		return NULL;
+	}
+
+	/* ring flags */
+	if (flags & MEMPOOL_F_SP_PUT)
+		rg_flags |= RING_F_SP_ENQ;
+	if (flags & MEMPOOL_F_SC_GET)
+		rg_flags |= RING_F_SC_DEQ;
+
+	rte_rwlock_write_lock(RTE_EAL_MEMPOOL_RWLOCK);
+
+	/*
+	 * reserve a memory zone for this mempool: private data is
+	 * cache-aligned
+	 */
+	private_data_size = RTE_ALIGN_CEIL(private_data_size,
+							RTE_MEMPOOL_ALIGN);
+
+	/* try to allocate tailq entry */
+	te = rte_zmalloc("MEMPOOL_TAILQ_ENTRY", sizeof(*te), 0);
+	if (te == NULL) {
+		RTE_LOG(ERR, MEMPOOL, "Cannot allocate tailq entry!\n");
+		goto exit;
+	}
+
+	/*
+	 * If user provided an external memory buffer, then use it to
+	 * store mempool objects. Otherwise reserve a memzone that is large
+	 * enough to hold mempool header and metadata plus mempool objects.
+	 */
+	mempool_size = sizeof(*mp) + private_data_size;
+	mempool_size = RTE_ALIGN_CEIL(mempool_size, RTE_MEMPOOL_ALIGN);
+
+	snprintf(mz_name, sizeof(mz_name), RTE_MEMPOOL_MZ_FORMAT, name);
+
+	mz = rte_memzone_reserve(mz_name, mempool_size, socket_id, mz_flags);
+
+	/* no more memory */
+	if (mz == NULL) {
+		rte_free(te);
+		goto exit;
+	}
+
+	/* init the mempool structure */
+	mp = mz->addr;
+	memset(mp, 0, sizeof(*mp));
+	snprintf(mp->name, sizeof(mp->name), "%s", name);
+	mp->phys_addr = mz->phys_addr;
+	mp->size = n;
+	mp->flags = flags;
+	mp->cache_size = cache_size;
+	mp->cache_flushthresh = CALC_CACHE_FLUSHTHRESH(cache_size);
+	mp->private_data_size = private_data_size;
+	mp->handler_idx = handler_idx;
+	mp->elt_size = elt_size;
+	mp->rt_pool = rte_mempool_ext_alloc(mp, name, n, socket_id, flags);
+
+	/* call the initializer */
+	if (mp_init)
+		mp_init(mp, mp_init_arg);
+
+	mempool_populate(mp, n, 1, obj_init, obj_init_arg);
+
+	te->data = (void *) mp;
+
+	rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
+	TAILQ_INSERT_TAIL(mempool_list, te, next);
+	rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+exit:
+	rte_rwlock_write_unlock(RTE_EAL_MEMPOOL_RWLOCK);
+
+	return mp;
+
+}
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 6e2390a..620cfb7 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -88,6 +88,8 @@ extern "C" {
 struct rte_mempool_debug_stats {
 	uint64_t put_bulk;         /**< Number of puts. */
 	uint64_t put_objs;         /**< Number of objects successfully put. */
+	uint64_t put_pool_bulk;    /**< Number of puts into pool. */
+	uint64_t put_pool_objs;    /**< Number of objects put into pool. */
 	uint64_t get_success_bulk; /**< Successful allocation number. */
 	uint64_t get_success_objs; /**< Objects successfully allocated. */
 	uint64_t get_fail_bulk;    /**< Failed allocation number. */
@@ -123,6 +125,7 @@ struct rte_mempool_objsz {
 #define RTE_MEMPOOL_NAMESIZE 32 /**< Maximum length of a memory pool. */
 #define RTE_MEMPOOL_MZ_PREFIX "MP_"
 
+
 /* "MP_<name>" */
 #define	RTE_MEMPOOL_MZ_FORMAT	RTE_MEMPOOL_MZ_PREFIX "%s"
 
@@ -175,12 +178,85 @@ struct rte_mempool_objtlr {
 #endif
 };
 
+/* Handler functions for external mempool support */
+typedef void *(*rte_mempool_alloc_t)(struct rte_mempool *mp,
+		const char *name, unsigned n, int socket_id, unsigned flags);
+typedef int (*rte_mempool_put_t)(void *p,
+		void * const *obj_table, unsigned n);
+typedef int (*rte_mempool_get_t)(void *p, void **obj_table,
+		unsigned n);
+typedef unsigned (*rte_mempool_get_count)(void *p);
+typedef int(*rte_mempool_free_t)(struct rte_mempool *mp);
+
+/**
+ * @internal wrapper for external mempool manager alloc callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param name
+ *   The name of the mempool.
+ * @param n
+ *   The number of objects in the mempool.
+ * @param socket_id
+ *   Socket id on which to allocate.
+ * @param flags
+ *   General flags passed to the alloc function.
+ */
+void *
+rte_mempool_ext_alloc(struct rte_mempool *mp,
+		const char *name, unsigned n, int socket_id, unsigned flags);
+
+/**
+ * @internal wrapper for external mempool manager get callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param obj_table
+ *   Pointer to a table of void * pointers (objects).
+ * @param n
+ *   Number of objects to get.
+ */
+int
+rte_mempool_ext_get_bulk(struct rte_mempool *mp, void **obj_table,
+		unsigned n);
+
+/**
+ * @internal wrapper for external mempool manager put callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param obj_table
+ *   Pointer to a table of void * pointers (objects).
+ * @param n
+ *   Number of objects to put
+ */
+int
+rte_mempool_ext_put_bulk(struct rte_mempool *mp, void * const *obj_table,
+		unsigned n);
+
+/**
+ * @internal wrapper for external mempool manager get_count callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ */
+int
+rte_mempool_ext_get_count(const struct rte_mempool *mp);
+
+/**
+ * @internal wrapper for external mempool manager free callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ */
+int
+rte_mempool_ext_free(struct rte_mempool *mp);
+
 /**
  * The RTE mempool structure.
  */
 struct rte_mempool {
 	char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool. */
-	struct rte_ring *ring;           /**< Ring to store objects. */
 	phys_addr_t phys_addr;           /**< Phys. addr. of mempool struct. */
 	int flags;                       /**< Flags of the mempool. */
 	uint32_t size;                   /**< Size of the mempool. */
@@ -194,6 +270,11 @@ struct rte_mempool {
 
 	unsigned private_data_size;      /**< Size of private data. */
 
+	/* Common pool data structure pointer */
+	void *rt_pool __rte_cache_aligned;
+
+	int16_t handler_idx;
+
 #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
 	/** Per-lcore local cache. */
 	struct rte_mempool_cache local_cache[RTE_MAX_LCORE];
@@ -223,6 +304,10 @@ struct rte_mempool {
 #define MEMPOOL_F_NO_CACHE_ALIGN 0x0002 /**< Do not align objs on cache lines.*/
 #define MEMPOOL_F_SP_PUT         0x0004 /**< Default put is "single-producer".*/
 #define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
+#define MEMPOOL_F_USE_STACK      0x0010 /**< Use a stack for the common pool. */
+#define MEMPOOL_F_USE_TM         0x0020
+#define MEMPOOL_F_NO_SECONDARY   0x0040
+
 
 /**
  * @internal When debug is enabled, store some statistics.
@@ -728,7 +813,6 @@ rte_dom0_mempool_create(const char *name, unsigned n, unsigned elt_size,
 		rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
 		int socket_id, unsigned flags);
 
-
 /**
  * Dump the status of the mempool to the console.
  *
@@ -753,7 +837,7 @@ void rte_mempool_dump(FILE *f, const struct rte_mempool *mp);
  */
 static inline void __attribute__((always_inline))
 __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
-		    unsigned n, int is_mp)
+		    unsigned n, __attribute__((unused)) int is_mp)
 {
 #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
 	struct rte_mempool_cache *cache;
@@ -769,8 +853,7 @@ __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
 
 #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
 	/* cache is not enabled or single producer or non-EAL thread */
-	if (unlikely(cache_size == 0 || is_mp == 0 ||
-		     lcore_id >= RTE_MAX_LCORE))
+	if (unlikely(cache_size == 0 || lcore_id >= RTE_MAX_LCORE))
 		goto ring_enqueue;
 
 	/* Go straight to ring if put would overflow mem allocated for cache */
@@ -793,8 +876,8 @@ __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
 
 	cache->len += n;
 
-	if (cache->len >= flushthresh) {
-		rte_ring_mp_enqueue_bulk(mp->ring, &cache->objs[cache_size],
+	if (unlikely(cache->len >= flushthresh)) {
+		rte_mempool_ext_put_bulk(mp, &cache->objs[cache_size],
 				cache->len - cache_size);
 		cache->len = cache_size;
 	}
@@ -804,22 +887,10 @@ __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
 ring_enqueue:
 #endif /* RTE_MEMPOOL_CACHE_MAX_SIZE > 0 */
 
-	/* push remaining objects in ring */
-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-	if (is_mp) {
-		if (rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n) < 0)
-			rte_panic("cannot put objects in mempool\n");
-	}
-	else {
-		if (rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n) < 0)
-			rte_panic("cannot put objects in mempool\n");
-	}
-#else
-	if (is_mp)
-		rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n);
-	else
-		rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n);
-#endif
+	/* Increment stats counter to tell us how many pool puts happened */
+	__MEMPOOL_STAT_ADD(mp, put_pool, n);
+
+	rte_mempool_ext_put_bulk(mp, obj_table, n);
 }
 
 
@@ -943,7 +1014,7 @@ rte_mempool_put(struct rte_mempool *mp, void *obj)
  */
 static inline int __attribute__((always_inline))
 __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
-		   unsigned n, int is_mc)
+		   unsigned n, __attribute__((unused)) int is_mc)
 {
 	int ret;
 #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
@@ -954,8 +1025,8 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
 	uint32_t cache_size = mp->cache_size;
 
 	/* cache is not enabled or single consumer */
-	if (unlikely(cache_size == 0 || is_mc == 0 ||
-		     n >= cache_size || lcore_id >= RTE_MAX_LCORE))
+	if (unlikely(cache_size == 0 || n >= cache_size ||
+						lcore_id >= RTE_MAX_LCORE))
 		goto ring_dequeue;
 
 	cache = &mp->local_cache[lcore_id];
@@ -967,7 +1038,8 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
 		uint32_t req = n + (cache_size - cache->len);
 
 		/* How many do we require i.e. number to fill the cache + the request */
-		ret = rte_ring_mc_dequeue_bulk(mp->ring, &cache->objs[cache->len], req);
+		ret = rte_mempool_ext_get_bulk(mp,
+						&cache->objs[cache->len], req);
 		if (unlikely(ret < 0)) {
 			/*
 			 * In the offchance that we are buffer constrained,
@@ -995,10 +1067,7 @@ ring_dequeue:
 #endif /* RTE_MEMPOOL_CACHE_MAX_SIZE > 0 */
 
 	/* get remaining objects from ring */
-	if (is_mc)
-		ret = rte_ring_mc_dequeue_bulk(mp->ring, obj_table, n);
-	else
-		ret = rte_ring_sc_dequeue_bulk(mp->ring, obj_table, n);
+	ret = rte_mempool_ext_get_bulk(mp, obj_table, n);
 
 	if (ret < 0)
 		__MEMPOOL_STAT_ADD(mp, get_fail, n);
@@ -1401,6 +1470,82 @@ ssize_t rte_mempool_xmem_usage(void *vaddr, uint32_t elt_num, size_t elt_sz,
 void rte_mempool_walk(void (*func)(const struct rte_mempool *, void *arg),
 		      void *arg);
 
+/**
+ * Function to get an index to an external mempool manager
+ *
+ * @param name
+ *   The name of the mempool handler to search for in the list of handlers
+ * @return
+ *   The index of the mempool handler in the list of registered mempool
+ *   handlers
+ */
+int16_t
+rte_get_mempool_handler(const char *name);
+
+
+/**
+ * Create a new mempool named *name* in memory.
+ *
+ * This function uses an externally defined alloc callback to allocate memory.
+ * Its size is set to n elements.
+ * All elements of the mempool are allocated separately to the mempool header.
+ *
+ * @param name
+ *   The name of the mempool.
+ * @param n
+ *   The number of elements in the mempool. The optimum size (in terms of
+ *   memory usage) for a mempool is when n is a power of two minus one:
+ *   n = (2^q - 1).
+ * @param cache_size
+ *   If cache_size is non-zero, the rte_mempool library will try to
+ *   limit the accesses to the common lockless pool, by maintaining a
+ *   per-lcore object cache. This argument must be lower or equal to
+ *   CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE and n / 1.5. It is advised to choose
+ *   cache_size to have "n modulo cache_size == 0": if this is
+ *   not the case, some elements will always stay in the pool and will
+ *   never be used. The access to the per-lcore table is of course
+ *   faster than the multi-producer/consumer pool. The cache can be
+ *   disabled if the cache_size argument is set to 0; it can be useful to
+ *   avoid losing objects in cache. Note that even if not used, the
+ *   memory space for cache is always reserved in a mempool structure,
+ *   except if CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE is set to 0.
+ * @param private_data_size
+ *   The size of the private data appended after the mempool
+ *   structure. This is useful for storing some private data after the
+ *   mempool structure, as is done for rte_mbuf_pool for example.
+ * @param mp_init
+ *   A function pointer that is called for initialization of the pool,
+ *   before object initialization. The user can initialize the private
+ *   data in this function if needed. This parameter can be NULL if
+ *   not needed.
+ * @param mp_init_arg
+ *   An opaque pointer to data that can be used in the mempool
+ *   constructor function.
+ * @param obj_init
+ *   A function pointer that is called for each object at
+ *   initialization of the pool. The user can set some meta data in
+ *   objects if needed. This parameter can be NULL if not needed.
+ *   The obj_init() function takes the mempool pointer, the init_arg,
+ *   the object pointer and the object number as parameters.
+ * @param obj_init_arg
+ *   An opaque pointer to data that can be used as an argument for
+ *   each call to the object constructor function.
+ * @param socket_id
+ *   The *socket_id* argument is the socket identifier in the case of
+ *   NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
+ *   constraint for the reserved zone.
+ * @param flags
+ *   Flags controlling the behaviour of the mempool (MEMPOOL_F_*).
+ * @return
+ *   The pointer to the new allocated mempool, on success. NULL on error
+ */
+struct rte_mempool *
+rte_mempool_create_ext(const char *name, unsigned n, unsigned elt_size,
+		unsigned cache_size, unsigned private_data_size,
+		rte_mempool_ctor_t *mp_init, void *mp_init_arg,
+		rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
+		int socket_id, unsigned flags,
+		const char *handler_name);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_mempool/rte_mempool_default.c b/lib/librte_mempool/rte_mempool_default.c
new file mode 100644
index 0000000..2493dc1
--- /dev/null
+++ b/lib/librte_mempool/rte_mempool_default.c
@@ -0,0 +1,229 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <rte_mempool.h>
+#include <rte_malloc.h>
+#include <string.h>
+
+#include "rte_mempool_internal.h"
+
+/*
+ * Indirect jump table to support external memory pools
+ */
+struct rte_mempool_handler_list mempool_handler_list = {
+	.sl = RTE_SPINLOCK_INITIALIZER,
+	.num_handlers = 0
+};
+
+/* TODO Convert to older mechanism of an array of structs */
+int16_t
+add_handler(struct rte_mempool_handler *h)
+{
+	int16_t handler_idx;
+
+	/* Serialise additions to the handler list */
+	rte_spinlock_lock(&mempool_handler_list.sl);
+
+	/* Check whether jump table has space */
+	if (mempool_handler_list.num_handlers >= RTE_MEMPOOL_MAX_HANDLER_IDX) {
+		rte_spinlock_unlock(&mempool_handler_list.sl);
+		RTE_LOG(ERR, MEMPOOL,
+				"Maximum number of mempool handlers exceeded\n");
+		return -1;
+	}
+
+	if ((h->put == NULL) || (h->get == NULL) ||
+			(h->get_count == NULL)) {
+		rte_spinlock_unlock(&mempool_handler_list.sl);
+		RTE_LOG(ERR, MEMPOOL,
+			"Missing callback while registering mempool handler\n");
+		return -1;
+	}
+
+	/* add new handler index */
+	handler_idx = mempool_handler_list.num_handlers++;
+
+	snprintf(mempool_handler_list.handler[handler_idx].name,
+				RTE_MEMPOOL_NAMESIZE, "%s", h->name);
+	mempool_handler_list.handler[handler_idx].alloc = h->alloc;
+	mempool_handler_list.handler[handler_idx].put = h->put;
+	mempool_handler_list.handler[handler_idx].get = h->get;
+	mempool_handler_list.handler[handler_idx].get_count = h->get_count;
+
+	rte_spinlock_unlock(&mempool_handler_list.sl);
+
+	return handler_idx;
+}
+
+/* TODO Convert to older mechanism of an array of structs */
+int16_t
+rte_get_mempool_handler(const char *name)
+{
+	int16_t i;
+
+	for (i = 0; i < mempool_handler_list.num_handlers; i++) {
+		if (!strcmp(name, mempool_handler_list.handler[i].name))
+			return i;
+	}
+	return -1;
+}
+
+static int
+common_ring_mp_put(void *p, void * const *obj_table, unsigned n)
+{
+	return rte_ring_mp_enqueue_bulk((struct rte_ring *)p, obj_table, n);
+}
+
+static int
+common_ring_sp_put(void *p, void * const *obj_table, unsigned n)
+{
+	return rte_ring_sp_enqueue_bulk((struct rte_ring *)p, obj_table, n);
+}
+
+static int
+common_ring_mc_get(void *p, void **obj_table, unsigned n)
+{
+	return rte_ring_mc_dequeue_bulk((struct rte_ring *)p, obj_table, n);
+}
+
+static int
+common_ring_sc_get(void *p, void **obj_table, unsigned n)
+{
+	return rte_ring_sc_dequeue_bulk((struct rte_ring *)p, obj_table, n);
+}
+
+static unsigned
+common_ring_get_count(void *p)
+{
+	return rte_ring_count((struct rte_ring *)p);
+}
+
+
+static void *
+rte_mempool_common_ring_alloc(struct rte_mempool *mp,
+		const char *name, unsigned n, int socket_id, unsigned flags)
+{
+	struct rte_ring *r;
+	char rg_name[RTE_RING_NAMESIZE];
+	int rg_flags = 0;
+
+	if (flags & MEMPOOL_F_SP_PUT)
+		rg_flags |= RING_F_SP_ENQ;
+	if (flags & MEMPOOL_F_SC_GET)
+		rg_flags |= RING_F_SC_DEQ;
+
+	/* allocate the ring that will be used to store objects */
+	/* Ring functions will return appropriate errors if we are
+	 * running as a secondary process etc., so no checks are made
+	 * in this function for that condition. */
+	snprintf(rg_name, sizeof(rg_name), "%s-ring", name);
+	r = rte_ring_create(rg_name, rte_align32pow2(n+1), socket_id, rg_flags);
+	if (r == NULL)
+		return NULL;
+
+	mp->rt_pool = (void *)r;
+
+	return (void *) r;
+}
+
+void *
+rte_mempool_ext_alloc(struct rte_mempool *mp,
+		const char *name, unsigned n, int socket_id, unsigned flags)
+{
+	if (mempool_handler_list.handler[mp->handler_idx].alloc) {
+		return (mempool_handler_list.handler[mp->handler_idx].alloc)
+						(mp, name, n, socket_id, flags);
+	}
+	return NULL;
+}
+
+inline int __attribute__((always_inline))
+rte_mempool_ext_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
+{
+	return (mempool_handler_list.handler[mp->handler_idx].get)
+						(mp->rt_pool, obj_table, n);
+}
+
+inline int __attribute__((always_inline))
+rte_mempool_ext_put_bulk(struct rte_mempool *mp, void * const *obj_table,
+		unsigned n)
+{
+	return (mempool_handler_list.handler[mp->handler_idx].put)
+						(mp->rt_pool, obj_table, n);
+}
+
+int
+rte_mempool_ext_get_count(const struct rte_mempool *mp)
+{
+	return (mempool_handler_list.handler[mp->handler_idx].get_count)
+						(mp->rt_pool);
+}
+
+static struct rte_mempool_handler handler_mp_mc = {
+	.name = "ring_mp_mc",
+	.alloc = rte_mempool_common_ring_alloc,
+	.put = common_ring_mp_put,
+	.get = common_ring_mc_get,
+	.get_count = common_ring_get_count,
+	.free = NULL
+};
+static struct rte_mempool_handler handler_sp_sc = {
+	.name = "ring_sp_sc",
+	.alloc = rte_mempool_common_ring_alloc,
+	.put = common_ring_sp_put,
+	.get = common_ring_sc_get,
+	.get_count = common_ring_get_count,
+	.free = NULL
+};
+static struct rte_mempool_handler handler_mp_sc = {
+	.name = "ring_mp_sc",
+	.alloc = rte_mempool_common_ring_alloc,
+	.put = common_ring_mp_put,
+	.get = common_ring_sc_get,
+	.get_count = common_ring_get_count,
+	.free = NULL
+};
+static struct rte_mempool_handler handler_sp_mc = {
+	.name = "ring_sp_mc",
+	.alloc = rte_mempool_common_ring_alloc,
+	.put = common_ring_sp_put,
+	.get = common_ring_mc_get,
+	.get_count = common_ring_get_count,
+	.free = NULL
+};
+
+REGISTER_MEMPOOL_HANDLER(handler_mp_mc);
+REGISTER_MEMPOOL_HANDLER(handler_sp_sc);
+REGISTER_MEMPOOL_HANDLER(handler_mp_sc);
+REGISTER_MEMPOOL_HANDLER(handler_sp_mc);
diff --git a/lib/librte_mempool/rte_mempool_internal.h b/lib/librte_mempool/rte_mempool_internal.h
new file mode 100644
index 0000000..92b7bde
--- /dev/null
+++ b/lib/librte_mempool/rte_mempool_internal.h
@@ -0,0 +1,74 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_MEMPOOL_INTERNAL_H_
+#define _RTE_MEMPOOL_INTERNAL_H_
+
+#include <rte_spinlock.h>
+#include <rte_mempool.h>
+
+#define RTE_MEMPOOL_MAX_HANDLER_IDX 16
+
+struct rte_mempool_handler {
+	char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool handler */
+
+	rte_mempool_alloc_t alloc;
+
+	rte_mempool_put_t put __rte_cache_aligned;
+
+	rte_mempool_get_t get __rte_cache_aligned;
+
+	rte_mempool_get_count get_count __rte_cache_aligned;
+
+	rte_mempool_free_t free __rte_cache_aligned;
+};
+
+struct rte_mempool_handler_list {
+	rte_spinlock_t sl;		  /**< Spinlock for add/delete. */
+
+	int32_t num_handlers;	  /**< Number of handlers that are valid. */
+
+	/* storage for all possible handlers */
+	struct rte_mempool_handler handler[RTE_MEMPOOL_MAX_HANDLER_IDX];
+};
+
+int16_t add_handler(struct rte_mempool_handler *h);
+
+#define REGISTER_MEMPOOL_HANDLER(h) \
+static int16_t __attribute__((used)) testfn_##h(void);\
+int16_t __attribute__((constructor, used)) testfn_##h(void)\
+{\
+	return add_handler(&h);\
+}
+
+#endif
-- 
1.9.3

^ permalink raw reply related	[flat|nested] 238+ messages in thread

* [PATCH 2/5] mempool: add stack (lifo) based external mempool handler
  2016-01-26 17:25 [PATCH 0/5] add external mempool manager David Hunt
  2016-01-26 17:25 ` [PATCH 1/5] mempool: add external mempool manager support David Hunt
@ 2016-01-26 17:25 ` David Hunt
  2016-02-04 15:02   ` Olivier MATZ
  2016-01-26 17:25 ` [PATCH 3/5] mempool: add custom external mempool handler example David Hunt
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 238+ messages in thread
From: David Hunt @ 2016-01-26 17:25 UTC (permalink / raw)
  To: dev

Adds a simple stack-based (LIFO) mempool handler.

Signed-off-by: David Hunt <david.hunt@intel.com>
---
 app/test/test_mempool_perf.c           |   1 -
 lib/librte_mempool/Makefile            |   1 +
 lib/librte_mempool/rte_mempool_stack.c | 167 +++++++++++++++++++++++++++++++++
 3 files changed, 168 insertions(+), 1 deletion(-)
 create mode 100644 lib/librte_mempool/rte_mempool_stack.c

diff --git a/app/test/test_mempool_perf.c b/app/test/test_mempool_perf.c
index 091c1df..c5a1d2a 100644
--- a/app/test/test_mempool_perf.c
+++ b/app/test/test_mempool_perf.c
@@ -52,7 +52,6 @@
 #include <rte_lcore.h>
 #include <rte_atomic.h>
 #include <rte_branch_prediction.h>
-#include <rte_ring.h>
 #include <rte_mempool.h>
 #include <rte_spinlock.h>
 #include <rte_malloc.h>
diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
index 7c81ef6..d795b48 100644
--- a/lib/librte_mempool/Makefile
+++ b/lib/librte_mempool/Makefile
@@ -43,6 +43,7 @@ LIBABIVER := 1
 # all source are stored in SRCS-y
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool.c
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_default.c
+SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_stack.c
 ifeq ($(CONFIG_RTE_LIBRTE_XEN_DOM0),y)
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_dom0_mempool.c
 endif
diff --git a/lib/librte_mempool/rte_mempool_stack.c b/lib/librte_mempool/rte_mempool_stack.c
new file mode 100644
index 0000000..c7d232e
--- /dev/null
+++ b/lib/librte_mempool/rte_mempool_stack.c
@@ -0,0 +1,167 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <rte_mempool.h>
+#include <rte_malloc.h>
+#include <string.h>
+
+#include "rte_mempool_internal.h"
+
+struct rte_mempool_common_stack {
+	/* Spinlock to protect access */
+	rte_spinlock_t sl;
+
+	uint32_t size;
+	uint32_t len;
+	void *objs[];
+};
+
+static void *
+common_stack_alloc(struct rte_mempool *mp,
+		const char *name, unsigned n, int socket_id, unsigned flags)
+{
+	struct rte_mempool_common_stack *s;
+	char stack_name[RTE_RING_NAMESIZE];
+
+	int size = sizeof(*s) + (n + 16) * sizeof(void *);
+
+	flags = flags; /* unused in this handler; silences the compiler */
+
+	/* Allocate our local memory structure */
+	snprintf(stack_name, sizeof(stack_name), "%s-common-stack", name);
+	s = rte_zmalloc_socket(stack_name,
+					size, RTE_CACHE_LINE_SIZE, socket_id);
+	if (s == NULL) {
+		RTE_LOG(ERR, MEMPOOL, "Cannot allocate stack!\n");
+		return NULL;
+	}
+
+	/* And the spinlock we use to protect access */
+	rte_spinlock_init(&s->sl);
+
+	s->size = n;
+	mp->rt_pool = (void *) s;
+	mp->handler_idx = rte_get_mempool_handler("stack");
+
+	return (void *) s;
+}
+
+static int common_stack_put(void *p, void * const *obj_table,
+		unsigned n)
+{
+	struct rte_mempool_common_stack *s =
+				(struct rte_mempool_common_stack *)p;
+	void **cache_objs;
+	unsigned index;
+
+	/* Acquire lock */
+	rte_spinlock_lock(&s->sl);
+	cache_objs = &s->objs[s->len];
+
+	/* Is there sufficient space in the stack? */
+	if ((s->len + n) > s->size) {
+		rte_spinlock_unlock(&s->sl);
+		return -ENOBUFS;
+	}
+
+	/* Add elements back into the cache */
+	for (index = 0; index < n; ++index, obj_table++)
+		cache_objs[index] = *obj_table;
+
+	s->len += n;
+
+	rte_spinlock_unlock(&s->sl);
+	return 0;
+}
+
+static int common_stack_get(void *p, void **obj_table,
+		unsigned n)
+{
+	struct rte_mempool_common_stack *s =
+					(struct rte_mempool_common_stack *)p;
+	void **cache_objs;
+	unsigned index, len;
+
+	/* Acquire lock */
+	rte_spinlock_lock(&s->sl);
+
+	if (unlikely(n > s->len)) {
+		rte_spinlock_unlock(&s->sl);
+		return -ENOENT;
+	}
+
+	cache_objs = s->objs;
+
+	for (index = 0, len = s->len - 1; index < n;
+					++index, len--, obj_table++)
+		*obj_table = cache_objs[len];
+
+	s->len -= n;
+	rte_spinlock_unlock(&s->sl);
+	return 0;
+}
+
+static unsigned common_stack_get_count(void *p)
+{
+	struct rte_mempool_common_stack *s =
+					(struct rte_mempool_common_stack *)p;
+
+	return s->len;
+}
+
+static int
+common_stack_free(struct rte_mempool *mp)
+{
+	struct rte_mempool_common_stack *s;
+
+	s = mp->rt_pool;
+
+	rte_free(s);
+
+	return 0;
+}
+
+static struct rte_mempool_handler handler_stack = {
+	.name = "stack",
+	.alloc = common_stack_alloc,
+	.put = common_stack_put,
+	.get = common_stack_get,
+	.get_count = common_stack_get_count,
+	.free = common_stack_free
+};
+
+REGISTER_MEMPOOL_HANDLER(handler_stack);
-- 
1.9.3

^ permalink raw reply related	[flat|nested] 238+ messages in thread

* [PATCH 3/5] mempool: add custom external mempool handler example
  2016-01-26 17:25 [PATCH 0/5] add external mempool manager David Hunt
  2016-01-26 17:25 ` [PATCH 1/5] mempool: add external mempool manager support David Hunt
  2016-01-26 17:25 ` [PATCH 2/5] mempool: add stack (lifo) based external mempool handler David Hunt
@ 2016-01-26 17:25 ` David Hunt
  2016-01-28 17:54   ` Jerin Jacob
  2016-01-26 17:25 ` [PATCH 4/5] mempool: add autotest for external mempool custom example David Hunt
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 238+ messages in thread
From: David Hunt @ 2016-01-26 17:25 UTC (permalink / raw)
  To: dev

Adds a simple ring-based mempool handler that mallocs each object individually.

Signed-off-by: David Hunt <david.hunt@intel.com>
---
 lib/librte_mempool/Makefile         |   1 +
 lib/librte_mempool/custom_mempool.c | 160 ++++++++++++++++++++++++++++++++++++
 2 files changed, 161 insertions(+)
 create mode 100644 lib/librte_mempool/custom_mempool.c

diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
index d795b48..4f72546 100644
--- a/lib/librte_mempool/Makefile
+++ b/lib/librte_mempool/Makefile
@@ -44,6 +44,7 @@ LIBABIVER := 1
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool.c
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_default.c
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_stack.c
+SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  custom_mempool.c
 ifeq ($(CONFIG_RTE_LIBRTE_XEN_DOM0),y)
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_dom0_mempool.c
 endif
diff --git a/lib/librte_mempool/custom_mempool.c b/lib/librte_mempool/custom_mempool.c
new file mode 100644
index 0000000..a9da8c5
--- /dev/null
+++ b/lib/librte_mempool/custom_mempool.c
@@ -0,0 +1,160 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <string.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <stdarg.h>
+#include <errno.h>
+#include <sys/queue.h>
+
+#include <rte_mempool.h>
+
+#include "rte_mempool_internal.h"
+
+/*
+ * Custom mempool handler example
+ * ==============================
+ *
+ * A simple external mempool handler that manages its elements through
+ * an rte_ring, with each element individually malloc'd.
+ */
+
+#define TIME_S 5
+#define MEMPOOL_ELT_SIZE 2048
+#define MAX_KEEP 128
+#define MEMPOOL_SIZE 8192
+
+#if 0
+/*
+ * For our example mempool handler, we use the following struct to
+ * pass info to our create callback so it can call rte_mempool_create
+ */
+struct custom_mempool_alloc_params {
+	char ring_name[RTE_RING_NAMESIZE];
+	unsigned n_elt;
+	unsigned elt_size;
+};
+#endif
+
+/*
+ * Simple example of custom mempool structure. Holds pointers to all the
+ * elements which are simply malloc'd in this example.
+ */
+struct custom_mempool {
+	struct rte_ring *r;             /* Ring to manage elements */
+	void *elements[MEMPOOL_SIZE];   /* Element pointers */
+};
+
+/*
+ * Loop through all the element pointers, allocate a chunk of memory
+ * for each, then insert that memory into the ring.
+ */
+static void *
+custom_mempool_alloc(struct rte_mempool *mp,
+		const char *name, unsigned n,
+		__attribute__((unused)) int socket_id,
+		__attribute__((unused)) unsigned flags)
+
+{
+	struct custom_mempool *cm;
+	uint32_t *objnum;
+	unsigned int i;
+
+	cm = malloc(sizeof(struct custom_mempool));
+	if (cm == NULL)
+		return NULL;
+
+	/* Create the ring so we can enqueue/dequeue */
+	cm->r = rte_ring_create(name, rte_align32pow2(n+1), 0, 0);
+	if (cm->r == NULL) {
+		free(cm);
+		return NULL;
+	}
+
+	/*
+	 * Loop over the elements and allocate the required memory,
+	 * placing each element in the ring.
+	 * Not worried about alignment or performance for this example.
+	 * Also, set the first 32 bits of each element to its number so
+	 * we can check it later on.
+	 */
+	for (i = 0; i < n; i++) {
+		cm->elements[i] = malloc(mp->elt_size);
+		if (cm->elements[i] == NULL)
+			return NULL;
+		memset(cm->elements[i], 0, mp->elt_size);
+		objnum = (uint32_t *)cm->elements[i];
+		*objnum = i;
+		rte_ring_sp_enqueue_bulk(cm->r, &(cm->elements[i]), 1);
+	}
+
+	return cm;
+}
+
+static int
+custom_mempool_put(void *p, void * const *obj_table, unsigned n)
+{
+	struct custom_mempool *cm = (struct custom_mempool *)p;
+
+	return rte_ring_mp_enqueue_bulk(cm->r, obj_table, n);
+}
+
+
+static int
+custom_mempool_get(void *p, void **obj_table, unsigned n)
+{
+	struct custom_mempool *cm = (struct custom_mempool *)p;
+
+	return rte_ring_mc_dequeue_bulk(cm->r, obj_table, n);
+}
+
+static unsigned
+custom_mempool_get_count(void *p)
+{
+	struct custom_mempool *cm = (struct custom_mempool *)p;
+
+	return rte_ring_count(cm->r);
+}
+
+static struct rte_mempool_handler mempool_handler_custom = {
+	.name = "custom_handler",
+	.alloc = custom_mempool_alloc,
+	.put = custom_mempool_put,
+	.get = custom_mempool_get,
+	.get_count = custom_mempool_get_count,
+};
+
+REGISTER_MEMPOOL_HANDLER(mempool_handler_custom);
+
-- 
1.9.3

^ permalink raw reply related	[flat|nested] 238+ messages in thread

* [PATCH 4/5] mempool: add autotest for external mempool custom example
  2016-01-26 17:25 [PATCH 0/5] add external mempool manager David Hunt
                   ` (2 preceding siblings ...)
  2016-01-26 17:25 ` [PATCH 3/5] mempool: add custom external mempool handler example David Hunt
@ 2016-01-26 17:25 ` David Hunt
  2016-01-26 17:25 ` [PATCH 5/5] mempool: allow rte_pktmbuf_pool_create switch between mempool handlers David Hunt
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 238+ messages in thread
From: David Hunt @ 2016-01-26 17:25 UTC (permalink / raw)
  To: dev

Signed-off-by: David Hunt <david.hunt@intel.com>
---
 app/test/Makefile           |   1 +
 app/test/test_ext_mempool.c | 474 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 475 insertions(+)
 create mode 100644 app/test/test_ext_mempool.c

diff --git a/app/test/Makefile b/app/test/Makefile
index ec33e1a..9a2f75f 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -74,6 +74,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer_perf.c
 SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer_racecond.c
 
 SRCS-y += test_mempool.c
+SRCS-y += test_ext_mempool.c
 SRCS-y += test_mempool_perf.c
 
 SRCS-y += test_mbuf.c
diff --git a/app/test/test_ext_mempool.c b/app/test/test_ext_mempool.c
new file mode 100644
index 0000000..b434f8b
--- /dev/null
+++ b/app/test/test_ext_mempool.c
@@ -0,0 +1,474 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <string.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <stdarg.h>
+#include <errno.h>
+#include <sys/queue.h>
+
+#include <rte_common.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_launch.h>
+#include <rte_cycles.h>
+#include <rte_eal.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+#include <rte_spinlock.h>
+#include <rte_malloc.h>
+
+#include "test.h"
+
+/*
+ * Mempool
+ * =======
+ *
+ * Basic tests: done on one core with and without cache:
+ *
+ *    - Get one object, put one object
+ *    - Get two objects, put two objects
+ *    - Get all objects, test that their content is not modified and
+ *      put them back in the pool.
+ */
+
+#define TIME_S 5
+#define MEMPOOL_ELT_SIZE 2048
+#define MAX_KEEP 128
+#define MEMPOOL_SIZE 8192
+
+static struct rte_mempool *mp;
+static struct rte_mempool *ext_nocache, *ext_cache;
+
+static rte_atomic32_t synchro;
+
+/*
+ * For our tests, we use the following struct to pass info to our create
+ *  callback so it can call rte_mempool_create
+ */
+struct custom_mempool_alloc_params {
+	char ring_name[RTE_RING_NAMESIZE];
+	unsigned n_elt;
+	unsigned elt_size;
+};
+
+/*
+ * Simple example of custom mempool structure. Holds pointers to all the
+ * elements which are simply malloc'd in this example.
+ */
+struct custom_mempool {
+	struct rte_ring *r;            /* Ring to manage elements */
+	void *elements[MEMPOOL_SIZE];  /* Element pointers */
+};
+
+/*
+ * save the object number in the first 4 bytes of object data. All
+ * other bytes are set to 0.
+ */
+static void
+my_obj_init(struct rte_mempool *mp, __attribute__((unused)) void *arg,
+		void *obj, unsigned i)
+{
+	uint32_t *objnum = obj;
+
+	memset(obj, 0, mp->elt_size);
+	*objnum = i;
+	printf("Setting objnum to %d\n", i);
+}
+
+/* basic tests (done on one core) */
+static int
+test_mempool_basic(void)
+{
+	uint32_t *objnum;
+	void **objtable;
+	void *obj, *obj2;
+	char *obj_data;
+	int ret = 0;
+	unsigned i, j;
+
+	/* dump the mempool status */
+	rte_mempool_dump(stdout, mp);
+
+	printf("Count = %d\n", rte_mempool_count(mp));
+	printf("get an object\n");
+	if (rte_mempool_get(mp, &obj) < 0) {
+		printf("get Failed\n");
+		return -1;
+	}
+	printf("Count = %d\n", rte_mempool_count(mp));
+	rte_mempool_dump(stdout, mp);
+
+	/* tests that improve coverage */
+	printf("get object count\n");
+	if (rte_mempool_count(mp) != MEMPOOL_SIZE - 1)
+		return -1;
+
+	printf("get private data\n");
+	if (rte_mempool_get_priv(mp) !=
+			(char *) mp + MEMPOOL_HEADER_SIZE(mp, mp->pg_num))
+		return -1;
+
+	printf("get physical address of an object\n");
+	if (MEMPOOL_IS_CONTIG(mp) &&
+			rte_mempool_virt2phy(mp, obj) !=
+			(phys_addr_t) (mp->phys_addr +
+			(phys_addr_t) ((char *) obj - (char *) mp)))
+		return -1;
+
+	printf("put the object back\n");
+	rte_mempool_put(mp, obj);
+	rte_mempool_dump(stdout, mp);
+
+	printf("get 2 objects\n");
+	if (rte_mempool_get(mp, &obj) < 0)
+		return -1;
+	if (rte_mempool_get(mp, &obj2) < 0) {
+		rte_mempool_put(mp, obj);
+		return -1;
+	}
+	rte_mempool_dump(stdout, mp);
+
+	printf("put the objects back\n");
+	rte_mempool_put(mp, obj);
+	rte_mempool_put(mp, obj2);
+	rte_mempool_dump(stdout, mp);
+
+	/*
+	 * get many objects: we cannot get them all because the cache
+	 * on other cores may not be empty.
+	 */
+	objtable = malloc(MEMPOOL_SIZE * sizeof(void *));
+	if (objtable == NULL)
+		return -1;
+
+	for (i = 0; i < MEMPOOL_SIZE; i++) {
+		if (rte_mempool_get(mp, &objtable[i]) < 0)
+			break;
+	}
+
+	/*
+	 * for each object, check that its content was not modified,
+	 * and put objects back in pool
+	 */
+	while (i--) {
+		obj = objtable[i];
+		obj_data = obj;
+		objnum = obj;
+		if (*objnum > MEMPOOL_SIZE) {
+			printf("bad object number(%d)\n", *objnum);
+			ret = -1;
+			break;
+		}
+		for (j = sizeof(*objnum); j < mp->elt_size; j++) {
+			if (obj_data[j] != 0)
+				ret = -1;
+		}
+
+		rte_mempool_put(mp, objtable[i]);
+	}
+
+	free(objtable);
+	if (ret == -1)
+		printf("objects were modified!\n");
+
+	return ret;
+}
+
+static int test_mempool_creation_with_exceeded_cache_size(void)
+{
+	struct rte_mempool *mp_cov;
+
+	mp_cov = rte_mempool_create("test_mempool_creation_exceeded_cache_size",
+						MEMPOOL_SIZE,
+						MEMPOOL_ELT_SIZE,
+						RTE_MEMPOOL_CACHE_MAX_SIZE + 32,
+						0,
+						NULL, NULL,
+						my_obj_init, NULL,
+						SOCKET_ID_ANY, 0);
+	if (NULL != mp_cov)
+		return -1;
+
+	return 0;
+}
+
+/*
+ * Some more basic tests of the mempool.
+ */
+static int
+test_mempool_basic_ex(struct rte_mempool *mp)
+{
+	unsigned i;
+	void **obj;
+	void *err_obj;
+	int ret = -1;
+
+	if (mp == NULL)
+		return ret;
+
+	obj = rte_calloc("test_mempool_basic_ex",
+					MEMPOOL_SIZE, sizeof(void *), 0);
+	if (obj == NULL) {
+		printf("test_mempool_basic_ex: rte_calloc failed\n");
+		return ret;
+	}
+	printf("test_mempool_basic_ex now mempool (%s) has %u free entries\n",
+					mp->name, rte_mempool_free_count(mp));
+	if (rte_mempool_full(mp) != 1) {
+		printf("test_mempool_basic_ex the mempool should be full\n");
+		goto fail_mp_basic_ex;
+	}
+
+	for (i = 0; i < MEMPOOL_SIZE; i++) {
+		if (rte_mempool_mc_get(mp, &obj[i]) < 0) {
+			printf("test_mp_basic_ex fail to get object for [%u]\n",
+					i);
+			goto fail_mp_basic_ex;
+		}
+	}
+	if (rte_mempool_mc_get(mp, &err_obj) == 0) {
+		printf("test_mempool_basic_ex get an impossible obj\n");
+		goto fail_mp_basic_ex;
+	}
+	printf("number: %u\n", i);
+	if (rte_mempool_empty(mp) != 1) {
+		printf("test_mempool_basic_ex the mempool should be empty\n");
+		goto fail_mp_basic_ex;
+	}
+
+	for (i = 0; i < MEMPOOL_SIZE; i++)
+		rte_mempool_mp_put(mp, obj[i]);
+
+	if (rte_mempool_full(mp) != 1) {
+		printf("test_mempool_basic_ex the mempool should be full\n");
+		goto fail_mp_basic_ex;
+	}
+
+	ret = 0;
+
+fail_mp_basic_ex:
+	if (obj != NULL)
+		rte_free((void *)obj);
+
+	return ret;
+}
+
+static int
+test_mempool_same_name_twice_creation(void)
+{
+	struct rte_mempool *mp_tc;
+
+	mp_tc = rte_mempool_create("test_mempool_same_name_twice_creation",
+						MEMPOOL_SIZE,
+						MEMPOOL_ELT_SIZE, 0, 0,
+						NULL, NULL,
+						NULL, NULL,
+						SOCKET_ID_ANY, 0);
+	if (NULL == mp_tc)
+		return -1;
+
+	mp_tc = rte_mempool_create("test_mempool_same_name_twice_creation",
+						MEMPOOL_SIZE,
+						MEMPOOL_ELT_SIZE, 0, 0,
+						NULL, NULL,
+						NULL, NULL,
+						SOCKET_ID_ANY, 0);
+	if (NULL != mp_tc)
+		return -1;
+
+	return 0;
+}
+
+/*
+ * Basic test for the mempool_xmem functions.
+ */
+static int
+test_mempool_xmem_misc(void)
+{
+	uint32_t elt_num, total_size;
+	size_t sz;
+	ssize_t usz;
+
+	elt_num = MAX_KEEP;
+	total_size = rte_mempool_calc_obj_size(MEMPOOL_ELT_SIZE, 0, NULL);
+	sz = rte_mempool_xmem_size(elt_num, total_size, MEMPOOL_PG_SHIFT_MAX);
+
+	usz = rte_mempool_xmem_usage(NULL, elt_num, total_size, 0, 1,
+		MEMPOOL_PG_SHIFT_MAX);
+
+	if (sz != (size_t)usz) {
+		printf("failure @ %s: rte_mempool_xmem_usage(%u, %u) "
+			"returns: %#zx, while expected: %#zx;\n",
+			__func__, elt_num, total_size, (size_t)usz, sz);
+		return -1;
+	}
+
+	return 0;
+}
+
+static int
+test_ext_mempool(void)
+{
+	int16_t handler_idx;
+
+	rte_atomic32_init(&synchro);
+
+	handler_idx = rte_get_mempool_handler("ring_mp_mc");
+	if (handler_idx < 0) {
+		printf("could not find ring_mp_mc mempool manager\n");
+		return -1;
+	}
+
+	handler_idx = rte_get_mempool_handler("ring_sp_sc");
+	if (handler_idx < 0) {
+		printf("could not find ring_sp_sc mempool manager\n");
+		return -1;
+	}
+
+	handler_idx = rte_get_mempool_handler("ring_mp_sc");
+	if (handler_idx < 0) {
+		printf("could not find ring_mp_sc mempool manager\n");
+		return -1;
+	}
+
+	handler_idx = rte_get_mempool_handler("ring_sp_mc");
+	if (handler_idx < 0) {
+		printf("could not find ring_sp_mc mempool manager\n");
+		return -1;
+	}
+
+	handler_idx = rte_get_mempool_handler("stack");
+	if (handler_idx < 0) {
+		printf("could not find stack mempool manager\n");
+		return -1;
+	}
+
+	handler_idx = rte_get_mempool_handler("custom_handler");
+	if (handler_idx < 0) {
+		printf("could not find custom_handler mempool manager\n");
+		return -1;
+	}
+
+	/* create an external mempool (without cache) */
+	if (ext_nocache == NULL)
+		ext_nocache = rte_mempool_create_ext(
+				"ext_nocache",         /* Name */
+				MEMPOOL_SIZE,          /* Number of Elements */
+				MEMPOOL_ELT_SIZE,      /* Element size */
+				0,                     /* Cache Size */
+				0,                     /* Private Data size */
+				NULL, NULL, NULL, NULL,
+				0,                     /* socket_id */
+				0,                     /* flags */
+				"custom_handler"
+				);
+	if (ext_nocache == NULL)
+		return -1;
+
+	/* create an external mempool (with cache) */
+	if (ext_cache == NULL)
+		ext_cache = rte_mempool_create_ext(
+				"ext_cache",           /* Name */
+				MEMPOOL_SIZE,          /* Number of Elements */
+				MEMPOOL_ELT_SIZE,      /* Element size */
+				16,                    /* Cache Size */
+				0,                     /* Private Data size */
+				NULL, NULL, NULL, NULL,
+				0,                     /* socket_id */
+				0,                     /* flags */
+				"custom_handler"
+				);
+	if (ext_cache == NULL)
+		return -1;
+
+	/* retrieve the mempool from its name */
+	if (rte_mempool_lookup("ext_nocache") != ext_nocache) {
+		printf("Cannot lookup mempool from its name\n");
+		return -1;
+	}
+	/* retrieve the mempool from its name */
+	if (rte_mempool_lookup("ext_cache") != ext_cache) {
+		printf("Cannot lookup mempool from its name\n");
+		return -1;
+	}
+
+	rte_mempool_list_dump(stdout);
+
+	printf("Running basic tests\n");
+	/* basic tests without cache */
+	mp = ext_nocache;
+	if (test_mempool_basic() < 0)
+		return -1;
+
+	/* basic tests with cache */
+	mp = ext_cache;
+	if (test_mempool_basic() < 0)
+		return -1;
+
+	/* more basic tests without cache */
+	if (test_mempool_basic_ex(ext_nocache) < 0)
+		return -1;
+
+	if (test_mempool_creation_with_exceeded_cache_size() < 0)
+		return -1;
+
+	if (test_mempool_same_name_twice_creation() < 0)
+		return -1;
+
+	if (test_mempool_xmem_misc() < 0)
+		return -1;
+
+	rte_mempool_list_dump(stdout);
+
+	return 0;
+}
+
+static struct test_command mempool_cmd = {
+	.command = "ext_mempool_autotest",
+	.callback = test_ext_mempool,
+};
+REGISTER_TEST_COMMAND(mempool_cmd);
-- 
1.9.3

^ permalink raw reply related	[flat|nested] 238+ messages in thread

* [PATCH 5/5] mempool: allow rte_pktmbuf_pool_create switch between mempool handlers
  2016-01-26 17:25 [PATCH 0/5] add external mempool manager David Hunt
                   ` (3 preceding siblings ...)
  2016-01-26 17:25 ` [PATCH 4/5] mempool: add autotest for external mempool custom example David Hunt
@ 2016-01-26 17:25 ` David Hunt
  2016-02-05 10:11   ` Olivier MATZ
  2016-01-28 17:26 ` [PATCH 0/5] add external mempool manager Jerin Jacob
  2016-02-16 14:48 ` [PATCH 0/6] " David Hunt
  6 siblings, 1 reply; 238+ messages in thread
From: David Hunt @ 2016-01-26 17:25 UTC (permalink / raw)
  To: dev

If the user wants rte_pktmbuf_pool_create() to use an external mempool
handler, they simply define MEMPOOL_HANDLER_NAME as the name of the
mempool handler they wish to use. This may move to the build config later.

Signed-off-by: David Hunt <david.hunt@intel.com>
---
 lib/librte_mbuf/rte_mbuf.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index c18b438..362396e 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -167,10 +167,21 @@ rte_pktmbuf_pool_create(const char *name, unsigned n,
 	mbp_priv.mbuf_data_room_size = data_room_size;
 	mbp_priv.mbuf_priv_size = priv_size;
 
+/* #define MEMPOOL_HANDLER_NAME "custom_handler" */
+#undef MEMPOOL_HANDLER_NAME
+
+#ifndef MEMPOOL_HANDLER_NAME
 	return rte_mempool_create(name, n, elt_size,
 		cache_size, sizeof(struct rte_pktmbuf_pool_private),
 		rte_pktmbuf_pool_init, &mbp_priv, rte_pktmbuf_init, NULL,
 		socket_id, 0);
+#else
+	return rte_mempool_create_ext(name, n, elt_size,
+		cache_size, sizeof(struct rte_pktmbuf_pool_private),
+		rte_pktmbuf_pool_init, &mbp_priv, rte_pktmbuf_init, NULL,
+		socket_id, 0,
+		MEMPOOL_HANDLER_NAME);
+#endif
 }
 
 /* do some sanity checks on a mbuf: panic if it fails */
-- 
1.9.3

^ permalink raw reply related	[flat|nested] 238+ messages in thread

* Re: [PATCH 0/5] add external mempool manager
  2016-01-26 17:25 [PATCH 0/5] add external mempool manager David Hunt
                   ` (4 preceding siblings ...)
  2016-01-26 17:25 ` [PATCH 5/5] mempool: allow rte_pktmbuf_pool_create switch between mempool handlers David Hunt
@ 2016-01-28 17:26 ` Jerin Jacob
  2016-01-29 13:40   ` Hunt, David
  2016-02-16 14:48 ` [PATCH 0/6] " David Hunt
  6 siblings, 1 reply; 238+ messages in thread
From: Jerin Jacob @ 2016-01-28 17:26 UTC (permalink / raw)
  To: David Hunt; +Cc: dev

On Tue, Jan 26, 2016 at 05:25:50PM +0000, David Hunt wrote:
> Hi all on the list.
> 
> Here's a proposed patch for an external mempool manager
> 
> The External Mempool Manager is an extension to the mempool API that allows
> users to add and use an external mempool manager, which allows external memory
> subsystems such as external hardware memory management systems and software
> based memory allocators to be used with DPDK.

I like this approach. It will be useful for external hardware memory
pool managers.

BTW, did you encounter any performance impact from changing to the
function pointer based approach?

> 
> The existing API to the internal DPDK mempool manager will remain unchanged
> and will be backward compatible.
> 
> There are two aspects to external mempool manager.
>   1. Adding the code for your new mempool handler. This is achieved by adding a
>      new mempool handler source file into the librte_mempool library, and
>      using the REGISTER_MEMPOOL_HANDLER macro.
>   2. Using the new API to call rte_mempool_create_ext to create a new mempool
>      using the name parameter to identify which handler to use.
> 
> New API calls added
>  1. A new mempool 'create' function which accepts mempool handler name.
>  2. A new mempool 'rte_get_mempool_handler' function which accepts mempool
>     handler name, and returns the index to the relevant set of callbacks for
>     that mempool handler
> 
> Several external mempool managers may be used in the same application. A new
> mempool can then be created by using the new 'create' function, providing the
> mempool handler name to point the mempool to the relevant mempool manager
> callback structure.
> 
> The old 'create' function can still be called by legacy programs, and will
> internally work out the mempool handle based on the flags provided (single
> producer, single consumer, etc). By default handles are created internally to
> implement the built-in DPDK mempool manager and mempool types.
> 
> The external mempool manager needs to provide the following functions.
>  1. alloc     - allocates the mempool memory, and adds each object onto a ring
>  2. put       - puts an object back into the mempool once an application has
>                 finished with it
>  3. get       - gets an object from the mempool for use by the application
>  4. get_count - gets the number of available objects in the mempool
>  5. free      - frees the mempool memory
> 
> Every time a get/put/get_count is called from the application/PMD, the
> callback for that mempool is called. These functions are in the fastpath,
> and any unoptimised handlers may limit performance.
> 
> The new APIs are as follows:
> 
> 1. rte_mempool_create_ext
> 
> struct rte_mempool *
> rte_mempool_create_ext(const char * name, unsigned n,
>         unsigned cache_size, unsigned private_data_size,
>         int socket_id, unsigned flags,
>         const char * handler_name);
> 
> 2. rte_get_mempool_handler
> 
> int16_t
> rte_get_mempool_handler(const char *name);

Do we need the above public API? In any case we need the rte_mempool*
pointer to operate on mempools (which holds the index anyway).

Maybe a similar API with a different name/return value would be better,
to let an ethernet driver that depends on a particular HW pool manager
find out whether a given "name" is registered.

> 
> Please see rte_mempool.h for further information on the parameters.
> 
> 
> The important thing to note is that the mempool handler is passed by name
> to rte_mempool_create_ext, and that in turn calls rte_get_mempool_handler to
> get the handler index, which is stored in the rte_memool structure. This
> allow multiple processes to use the same mempool, as the function pointers
> are accessed via handler index.
> 
> The mempool handler structure contains callbacks to the implementation of
> the handler, and is set up for registration as follows:
> 
> static struct rte_mempool_handler handler_sp_mc = {
>     .name = "ring_sp_mc",
>     .alloc = rte_mempool_common_ring_alloc,
>     .put = common_ring_sp_put,
>     .get = common_ring_mc_get,
>     .get_count = common_ring_get_count,
>     .free = common_ring_free,
> };
> 
> And then the following macro will register the handler in the array of handlers
> 
> REGISTER_MEMPOOL_HANDLER(handler_mp_mc);
> 
> For and example of a simple malloc based mempool manager, see
> lib/librte_mempool/custom_mempool.c
> 
> For an example of API usage, please see app/test/test_ext_mempool.c, which
> implements a rudimentary mempool manager using simple mallocs for each
> mempool object (custom_mempool.c).
> 
> 
> David Hunt (5):
>   mempool: add external mempool manager support
> >   mempool: add stack (lifo) based external mempool handler
>   mempool: add custom external mempool handler example
>   mempool: add autotest for external mempool custom example
> >   mempool: allow rte_pktmbuf_pool_create switch between mempool handlers
> 
>  app/test/Makefile                         |   1 +
>  app/test/test_ext_mempool.c               | 470 ++++++++++++++++++++++++++++++
>  app/test/test_mempool_perf.c              |   2 -
>  lib/librte_mbuf/rte_mbuf.c                |  11 +
>  lib/librte_mempool/Makefile               |   3 +
>  lib/librte_mempool/custom_mempool.c       | 158 ++++++++++
>  lib/librte_mempool/rte_mempool.c          | 208 +++++++++----
>  lib/librte_mempool/rte_mempool.h          | 205 +++++++++++--
>  lib/librte_mempool/rte_mempool_default.c  | 229 +++++++++++++++
>  lib/librte_mempool/rte_mempool_internal.h |  70 +++++
>  lib/librte_mempool/rte_mempool_stack.c    | 162 ++++++++++
>  11 files changed, 1430 insertions(+), 89 deletions(-)
>  create mode 100644 app/test/test_ext_mempool.c
>  create mode 100644 lib/librte_mempool/custom_mempool.c
>  create mode 100644 lib/librte_mempool/rte_mempool_default.c
>  create mode 100644 lib/librte_mempool/rte_mempool_internal.h
>  create mode 100644 lib/librte_mempool/rte_mempool_stack.c
> 
> -- 
> 1.9.3
> 

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH 1/5] mempool: add external mempool manager support
  2016-01-26 17:25 ` [PATCH 1/5] mempool: add external mempool manager support David Hunt
@ 2016-01-28 17:52   ` Jerin Jacob
  2016-02-03 14:16     ` Hunt, David
  2016-02-04 14:52   ` Olivier MATZ
  1 sibling, 1 reply; 238+ messages in thread
From: Jerin Jacob @ 2016-01-28 17:52 UTC (permalink / raw)
  To: David Hunt; +Cc: dev

On Tue, Jan 26, 2016 at 05:25:51PM +0000, David Hunt wrote:
> Adds the new rte_mempool_create_ext api and callback mechanism for
> external mempool handlers
> 
> Modifies the existing rte_mempool_create to set up the handler_idx to
> the relevant mempool handler based on the handler name:
> 	ring_sp_sc
> 	ring_mp_mc
> 	ring_sp_mc
> 	ring_mp_sc
> 
> Signed-off-by: David Hunt <david.hunt@intel.com>
> ---
>  app/test/test_mempool_perf.c              |   1 -
>  lib/librte_mempool/Makefile               |   1 +
>  lib/librte_mempool/rte_mempool.c          | 210 +++++++++++++++++++--------
>  lib/librte_mempool/rte_mempool.h          | 207 +++++++++++++++++++++++----
>  lib/librte_mempool/rte_mempool_default.c  | 229 ++++++++++++++++++++++++++++++
>  lib/librte_mempool/rte_mempool_internal.h |  74 ++++++++++
>  6 files changed, 634 insertions(+), 88 deletions(-)
>  create mode 100644 lib/librte_mempool/rte_mempool_default.c
>  create mode 100644 lib/librte_mempool/rte_mempool_internal.h
> 
> diff --git a/app/test/test_mempool_perf.c b/app/test/test_mempool_perf.c
> index cdc02a0..091c1df 100644
> --- a/app/test/test_mempool_perf.c
> +++ b/app/test/test_mempool_perf.c
> @@ -161,7 +161,6 @@ per_lcore_mempool_test(__attribute__((unused)) void *arg)
>  							   n_get_bulk);
>  				if (unlikely(ret < 0)) {
>  					rte_mempool_dump(stdout, mp);
> -					rte_ring_dump(stdout, mp->ring);
>  					/* in this case, objects are lost... */
>  					return -1;
>  				}
> diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
> index a6898ef..7c81ef6 100644
> --- a/lib/librte_mempool/Makefile
> +++ b/lib/librte_mempool/Makefile
> @@ -42,6 +42,7 @@ LIBABIVER := 1
>  
>  # all source are stored in SRCS-y
>  SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool.c
> +SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_default.c
>  ifeq ($(CONFIG_RTE_LIBRTE_XEN_DOM0),y)
>  SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_dom0_mempool.c
>  endif
> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> index aff5f6d..8c01838 100644
> --- a/lib/librte_mempool/rte_mempool.c
> +++ b/lib/librte_mempool/rte_mempool.c
> @@ -59,10 +59,11 @@
>  #include <rte_spinlock.h>
>  
>  #include "rte_mempool.h"
> +#include "rte_mempool_internal.h"
>  
>  TAILQ_HEAD(rte_mempool_list, rte_tailq_entry);
>  
> -static struct rte_tailq_elem rte_mempool_tailq = {
> +struct rte_tailq_elem rte_mempool_tailq = {
>  	.name = "RTE_MEMPOOL",
>  };
>  EAL_REGISTER_TAILQ(rte_mempool_tailq)
> @@ -149,7 +150,7 @@ mempool_add_elem(struct rte_mempool *mp, void *obj, uint32_t obj_idx,
>  		obj_init(mp, obj_init_arg, obj, obj_idx);
>  
>  	/* enqueue in ring */
> -	rte_ring_sp_enqueue(mp->ring, obj);
> +	rte_mempool_ext_put_bulk(mp, &obj, 1);
>  }
>  
>  uint32_t
> @@ -375,48 +376,28 @@ rte_mempool_xmem_usage(void *vaddr, uint32_t elt_num, size_t elt_sz,
>  	return usz;
>  }
>  
> -#ifndef RTE_LIBRTE_XEN_DOM0
> -/* stub if DOM0 support not configured */
> -struct rte_mempool *
> -rte_dom0_mempool_create(const char *name __rte_unused,
> -			unsigned n __rte_unused,
> -			unsigned elt_size __rte_unused,
> -			unsigned cache_size __rte_unused,
> -			unsigned private_data_size __rte_unused,
> -			rte_mempool_ctor_t *mp_init __rte_unused,
> -			void *mp_init_arg __rte_unused,
> -			rte_mempool_obj_ctor_t *obj_init __rte_unused,
> -			void *obj_init_arg __rte_unused,
> -			int socket_id __rte_unused,
> -			unsigned flags __rte_unused)
> -{
> -	rte_errno = EINVAL;
> -	return NULL;
> -}
> -#endif
> -
>  /* create the mempool */
>  struct rte_mempool *
>  rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
> -		   unsigned cache_size, unsigned private_data_size,
> -		   rte_mempool_ctor_t *mp_init, void *mp_init_arg,
> -		   rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
> -		   int socket_id, unsigned flags)
> +			unsigned cache_size, unsigned private_data_size,
> +			rte_mempool_ctor_t *mp_init, void *mp_init_arg,
> +			rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
> +			int socket_id, unsigned flags)
>  {
>  	if (rte_xen_dom0_supported())
>  		return rte_dom0_mempool_create(name, n, elt_size,
> -					       cache_size, private_data_size,
> -					       mp_init, mp_init_arg,
> -					       obj_init, obj_init_arg,
> -					       socket_id, flags);
> +			cache_size, private_data_size,
> +			mp_init, mp_init_arg,
> +			obj_init, obj_init_arg,
> +			socket_id, flags);
>  	else
>  		return rte_mempool_xmem_create(name, n, elt_size,
> -					       cache_size, private_data_size,
> -					       mp_init, mp_init_arg,
> -					       obj_init, obj_init_arg,
> -					       socket_id, flags,
> -					       NULL, NULL, MEMPOOL_PG_NUM_DEFAULT,
> -					       MEMPOOL_PG_SHIFT_MAX);
> +			cache_size, private_data_size,
> +			mp_init, mp_init_arg,
> +			obj_init, obj_init_arg,
> +			socket_id, flags,
> +			NULL, NULL,
> +			MEMPOOL_PG_NUM_DEFAULT, MEMPOOL_PG_SHIFT_MAX);
>  }
>  
>  /*
> @@ -435,11 +416,9 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
>  		const phys_addr_t paddr[], uint32_t pg_num, uint32_t pg_shift)
>  {
>  	char mz_name[RTE_MEMZONE_NAMESIZE];
> -	char rg_name[RTE_RING_NAMESIZE];
>  	struct rte_mempool_list *mempool_list;
>  	struct rte_mempool *mp = NULL;
>  	struct rte_tailq_entry *te;
> -	struct rte_ring *r;
>  	const struct rte_memzone *mz;
>  	size_t mempool_size;
>  	int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
> @@ -469,7 +448,7 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
>  
>  	/* asked cache too big */
>  	if (cache_size > RTE_MEMPOOL_CACHE_MAX_SIZE ||
> -	    CALC_CACHE_FLUSHTHRESH(cache_size) > n) {
> +		CALC_CACHE_FLUSHTHRESH(cache_size) > n) {
>  		rte_errno = EINVAL;
>  		return NULL;
>  	}
> @@ -502,16 +481,8 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
>  		return NULL;
>  	}
>  
> -	rte_rwlock_write_lock(RTE_EAL_MEMPOOL_RWLOCK);
>  
> -	/* allocate the ring that will be used to store objects */
> -	/* Ring functions will return appropriate errors if we are
> -	 * running as a secondary process etc., so no checks made
> -	 * in this function for that condition */
> -	snprintf(rg_name, sizeof(rg_name), RTE_MEMPOOL_MZ_FORMAT, name);
> -	r = rte_ring_create(rg_name, rte_align32pow2(n+1), socket_id, rg_flags);
> -	if (r == NULL)
> -		goto exit;
> +	rte_rwlock_write_lock(RTE_EAL_MEMPOOL_RWLOCK);
>  
>  	/*
>  	 * reserve a memory zone for this mempool: private data is
> @@ -588,7 +559,6 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
>  	memset(mp, 0, sizeof(*mp));
>  	snprintf(mp->name, sizeof(mp->name), "%s", name);
>  	mp->phys_addr = mz->phys_addr;
> -	mp->ring = r;
>  	mp->size = n;
>  	mp->flags = flags;
>  	mp->elt_size = objsz.elt_size;
> @@ -598,6 +568,22 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
>  	mp->cache_flushthresh = CALC_CACHE_FLUSHTHRESH(cache_size);
>  	mp->private_data_size = private_data_size;
>  
> +	/*
> +	 * Since we have 4 combinations of the SP/SC/MP/MC, and stack,
> +	 * examine the
> +	 * flags to set the correct index into the handler table.
> +	 */
> +	if (flags & MEMPOOL_F_USE_STACK)
> +		mp->handler_idx = rte_get_mempool_handler("stack");
> +	else if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
> +		mp->handler_idx = rte_get_mempool_handler("ring_sp_sc");
> +	else if (flags & MEMPOOL_F_SP_PUT)
> +		mp->handler_idx = rte_get_mempool_handler("ring_sp_mc");
> +	else if (flags & MEMPOOL_F_SC_GET)
> +		mp->handler_idx = rte_get_mempool_handler("ring_mp_sc");
> +	else
> +		mp->handler_idx = rte_get_mempool_handler("ring_mp_mc");
> +

Why still use flag-based selection here? Why not name-based? See below
for more discussion.


>  	/* calculate address of the first element for continuous mempool. */
>  	obj = (char *)mp + MEMPOOL_HEADER_SIZE(mp, pg_num) +
>  		private_data_size;
> @@ -613,7 +599,6 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
>  		mp->elt_va_start = (uintptr_t)obj;
>  		mp->elt_pa[0] = mp->phys_addr +
>  			(mp->elt_va_start - (uintptr_t)mp);
> -
>  	/* mempool elements in a separate chunk of memory. */
>  	} else {
>  		mp->elt_va_start = (uintptr_t)vaddr;
> @@ -622,6 +607,10 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
>  
>  	mp->elt_va_end = mp->elt_va_start;
>  
> +	/* Parameters are setup. Call the mempool handler alloc */
> +	if ((rte_mempool_ext_alloc(mp, name, n, socket_id, flags)) == NULL)
> +		goto exit;
> +
>  	/* call the initializer */
>  	if (mp_init)
>  		mp_init(mp, mp_init_arg);
> @@ -646,7 +635,7 @@ rte_mempool_count(const struct rte_mempool *mp)
>  {
>  	unsigned count;
>  
> -	count = rte_ring_count(mp->ring);
> +	count = rte_mempool_ext_get_count(mp);
>  
>  #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
>  	{
> @@ -681,7 +670,9 @@ rte_mempool_dump_cache(FILE *f, const struct rte_mempool *mp)
>  	fprintf(f, "    cache_size=%"PRIu32"\n", mp->cache_size);
>  	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
>  		cache_count = mp->local_cache[lcore_id].len;
> -		fprintf(f, "    cache_count[%u]=%u\n", lcore_id, cache_count);
> +		if (cache_count > 0)
> +			fprintf(f, "    cache_count[%u]=%u\n",
> +						lcore_id, cache_count);
>  		count += cache_count;
>  	}
>  	fprintf(f, "    total_cache_count=%u\n", count);
> @@ -802,14 +793,13 @@ rte_mempool_dump(FILE *f, const struct rte_mempool *mp)
>  
>  	fprintf(f, "mempool <%s>@%p\n", mp->name, mp);
>  	fprintf(f, "  flags=%x\n", mp->flags);
> -	fprintf(f, "  ring=<%s>@%p\n", mp->ring->name, mp->ring);
>  	fprintf(f, "  phys_addr=0x%" PRIx64 "\n", mp->phys_addr);
>  	fprintf(f, "  size=%"PRIu32"\n", mp->size);
>  	fprintf(f, "  header_size=%"PRIu32"\n", mp->header_size);
>  	fprintf(f, "  elt_size=%"PRIu32"\n", mp->elt_size);
>  	fprintf(f, "  trailer_size=%"PRIu32"\n", mp->trailer_size);
>  	fprintf(f, "  total_obj_size=%"PRIu32"\n",
> -	       mp->header_size + mp->elt_size + mp->trailer_size);
> +		   mp->header_size + mp->elt_size + mp->trailer_size);
>  
>  	fprintf(f, "  private_data_size=%"PRIu32"\n", mp->private_data_size);
>  	fprintf(f, "  pg_num=%"PRIu32"\n", mp->pg_num);
> @@ -825,7 +815,7 @@ rte_mempool_dump(FILE *f, const struct rte_mempool *mp)
>  			mp->size);
>  
>  	cache_count = rte_mempool_dump_cache(f, mp);
> -	common_count = rte_ring_count(mp->ring);
> +	common_count = /* rte_ring_count(mp->ring)*/0;
>  	if ((cache_count + common_count) > mp->size)
>  		common_count = mp->size - cache_count;
>  	fprintf(f, "  common_pool_count=%u\n", common_count);
> @@ -904,7 +894,7 @@ rte_mempool_lookup(const char *name)
>  }
>  
>  void rte_mempool_walk(void (*func)(const struct rte_mempool *, void *),
> -		      void *arg)
> +			  void *arg)
>  {
>  	struct rte_tailq_entry *te = NULL;
>  	struct rte_mempool_list *mempool_list;
> @@ -919,3 +909,111 @@ void rte_mempool_walk(void (*func)(const struct rte_mempool *, void *),
>  
>  	rte_rwlock_read_unlock(RTE_EAL_MEMPOOL_RWLOCK);
>  }
> +
> +
> +/* create the mempool using an external mempool manager */
> +struct rte_mempool *
> +rte_mempool_create_ext(const char *name, unsigned n, unsigned elt_size,
> +			unsigned cache_size, unsigned private_data_size,
> +			rte_mempool_ctor_t *mp_init, void *mp_init_arg,
> +			rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
> +			int socket_id, unsigned flags,
> +			const char *handler_name)
> +{
> +	char mz_name[RTE_MEMZONE_NAMESIZE];
> +	struct rte_mempool_list *mempool_list;
> +	struct rte_mempool *mp = NULL;
> +	struct rte_tailq_entry *te;
> +	const struct rte_memzone *mz;
> +	size_t mempool_size;
> +	int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
> +	int rg_flags = 0;
> +	int16_t handler_idx;
> +
> +	mempool_list = RTE_TAILQ_CAST(rte_mempool_tailq.head, rte_mempool_list);
> +
> +	/* asked cache too big */
> +	if (cache_size > RTE_MEMPOOL_CACHE_MAX_SIZE ||
> +		CALC_CACHE_FLUSHTHRESH(cache_size) > n) {
> +		rte_errno = EINVAL;
> +		return NULL;
> +	}
> +
> +	handler_idx = rte_get_mempool_handler(handler_name);
> +	if (handler_idx < 0) {
> +		RTE_LOG(ERR, MEMPOOL, "Cannot find mempool handler by name!\n");
> +		goto exit;
> +	}
> +
> +	/* ring flags */
> +	if (flags & MEMPOOL_F_SP_PUT)
> +		rg_flags |= RING_F_SP_ENQ;
> +	if (flags & MEMPOOL_F_SC_GET)
> +		rg_flags |= RING_F_SC_DEQ;
> +

rg_flags is not used anywhere below this point.

> +	rte_rwlock_write_lock(RTE_EAL_MEMPOOL_RWLOCK);
> +
> +	/*
> +	 * reserve a memory zone for this mempool: private data is
> +	 * cache-aligned
> +	 */
> +	private_data_size = RTE_ALIGN_CEIL(private_data_size,
> +							RTE_MEMPOOL_ALIGN);
> +
> +	/* try to allocate tailq entry */
> +	te = rte_zmalloc("MEMPOOL_TAILQ_ENTRY", sizeof(*te), 0);
> +	if (te == NULL) {
> +		RTE_LOG(ERR, MEMPOOL, "Cannot allocate tailq entry!\n");
> +		goto exit;
> +	}
> +
> +	/*
> +	 * If user provided an external memory buffer, then use it to
> +	 * store mempool objects. Otherwise reserve a memzone that is large
> +	 * enough to hold mempool header and metadata plus mempool objects.
> +	 */
> +	mempool_size = sizeof(*mp) + private_data_size;
> +	mempool_size = RTE_ALIGN_CEIL(mempool_size, RTE_MEMPOOL_ALIGN);
> +
> +	snprintf(mz_name, sizeof(mz_name), RTE_MEMPOOL_MZ_FORMAT, name);
> +
> +	mz = rte_memzone_reserve(mz_name, mempool_size, socket_id, mz_flags);
> +
> +	/* no more memory */
> +	if (mz == NULL) {
> +		rte_free(te);
> +		goto exit;
> +	}
> +
> +	/* init the mempool structure */
> +	mp = mz->addr;
> +	memset(mp, 0, sizeof(*mp));
> +	snprintf(mp->name, sizeof(mp->name), "%s", name);
> +	mp->phys_addr = mz->phys_addr;
> +	mp->size = n;
> +	mp->flags = flags;
> +	mp->cache_size = cache_size;
> +	mp->cache_flushthresh = CALC_CACHE_FLUSHTHRESH(cache_size);
> +	mp->private_data_size = private_data_size;
> +	mp->handler_idx = handler_idx;
> +	mp->elt_size = elt_size;
> +	mp->rt_pool = rte_mempool_ext_alloc(mp, name, n, socket_id, flags);


IMO, we can avoid duplicating the above code with rte_mempool_create,
i.e. have rte_mempool_create call rte_mempool_create_ext(.., "ring_mp_mc")

> +
> +	/* call the initializer */
> +	if (mp_init)
> +		mp_init(mp, mp_init_arg);
> +
> +	mempool_populate(mp, n, 1, obj_init, obj_init_arg);
> +
> +	te->data = (void *) mp;
> +
> +	rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
> +	TAILQ_INSERT_TAIL(mempool_list, te, next);
> +	rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
> +
> +exit:
> +	rte_rwlock_write_unlock(RTE_EAL_MEMPOOL_RWLOCK);
> +
> +	return mp;
> +
> +}
> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> index 6e2390a..620cfb7 100644
> --- a/lib/librte_mempool/rte_mempool.h
> +++ b/lib/librte_mempool/rte_mempool.h
> @@ -88,6 +88,8 @@ extern "C" {
>  struct rte_mempool_debug_stats {
>  	uint64_t put_bulk;         /**< Number of puts. */
>  	uint64_t put_objs;         /**< Number of objects successfully put. */
> +	uint64_t put_pool_bulk;    /**< Number of puts into pool. */
> +	uint64_t put_pool_objs;    /**< Number of objects into pool. */
>  	uint64_t get_success_bulk; /**< Successful allocation number. */
>  	uint64_t get_success_objs; /**< Objects successfully allocated. */
>  	uint64_t get_fail_bulk;    /**< Failed allocation number. */
> @@ -123,6 +125,7 @@ struct rte_mempool_objsz {
>  #define RTE_MEMPOOL_NAMESIZE 32 /**< Maximum length of a memory pool. */
>  #define RTE_MEMPOOL_MZ_PREFIX "MP_"
>  
> +
>  /* "MP_<name>" */
>  #define	RTE_MEMPOOL_MZ_FORMAT	RTE_MEMPOOL_MZ_PREFIX "%s"
>  
> @@ -175,12 +178,85 @@ struct rte_mempool_objtlr {
>  #endif
>  };
>  
> +/* Handler functions for external mempool support */
> +typedef void *(*rte_mempool_alloc_t)(struct rte_mempool *mp,
> +		const char *name, unsigned n, int socket_id, unsigned flags);
> +typedef int (*rte_mempool_put_t)(void *p,
> +		void * const *obj_table, unsigned n);
> +typedef int (*rte_mempool_get_t)(void *p, void **obj_table,
> +		unsigned n);
> +typedef unsigned (*rte_mempool_get_count)(void *p);
> +typedef int(*rte_mempool_free_t)(struct rte_mempool *mp);
> +
> +/**
> + * @internal wrapper for external mempool manager alloc callback.
> + *
> + * @param mp
> + *   Pointer to the memory pool.
> + * @param name
> + *   Name of the statistics field to increment in the memory pool.
> + * @param n
> + *   Number to add to the object-oriented statistics.
> + * @param socket_id
> + *   socket id on which to allocate.
> + * @param flags
> + *   general flags to allocate function
> + */
> +void *
> +rte_mempool_ext_alloc(struct rte_mempool *mp,
> +		const char *name, unsigned n, int socket_id, unsigned flags);
> +
> +/**
> + * @internal wrapper for external mempool manager get callback.
> + *
> + * @param mp
> + *   Pointer to the memory pool.
> + * @param obj_table
> + *   Pointer to a table of void * pointers (objects).
> + * @param n
> + *	 Number of objects to get
> + */
> +int
> +rte_mempool_ext_get_bulk(struct rte_mempool *mp, void **obj_table,
> +		unsigned n);
> +
> +/**
> + * @internal wrapper for external mempool manager put callback.
> + *
> + * @param mp
> + *   Pointer to the memory pool.
> + * @param obj_table
> + *   Pointer to a table of void * pointers (objects).
> + * @param n
> + *   Number of objects to put
> + */
> +int
> +rte_mempool_ext_put_bulk(struct rte_mempool *mp, void * const *obj_table,
> +		unsigned n);
> +
> +/**
> + * @internal wrapper for external mempool manager get_count callback.
> + *
> + * @param mp
> + *   Pointer to the memory pool.
> + */
> +int
> +rte_mempool_ext_get_count(const struct rte_mempool *mp);
> +
> +/**
> + * @internal wrapper for external mempool manager free callback.
> + *
> + * @param mp
> + *   Pointer to the memory pool.
> + */
> +int
> +rte_mempool_ext_free(struct rte_mempool *mp);
> +
>  /**
>   * The RTE mempool structure.
>   */
>  struct rte_mempool {
>  	char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool. */
> -	struct rte_ring *ring;           /**< Ring to store objects. */
>  	phys_addr_t phys_addr;           /**< Phys. addr. of mempool struct. */
>  	int flags;                       /**< Flags of the mempool. */
>  	uint32_t size;                   /**< Size of the mempool. */
> @@ -194,6 +270,11 @@ struct rte_mempool {
>  
>  	unsigned private_data_size;      /**< Size of private data. */
>  
> +	/* Common pool data structure pointer */
> +	void *rt_pool __rte_cache_aligned;

Do we need to push rt_pool onto the next cache line? The "cache_size"
variable, etc. are used in the fast path, and this change makes the
structure occupy one more cache line.

> +
> +	int16_t handler_idx;
> +
>  #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
>  	/** Per-lcore local cache. */
>  	struct rte_mempool_cache local_cache[RTE_MAX_LCORE];
> @@ -223,6 +304,10 @@ struct rte_mempool {
>  #define MEMPOOL_F_NO_CACHE_ALIGN 0x0002 /**< Do not align objs on cache lines.*/
>  #define MEMPOOL_F_SP_PUT         0x0004 /**< Default put is "single-producer".*/
>  #define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
> +#define MEMPOOL_F_USE_STACK      0x0010 /**< Use a stack for the common pool. */
> +#define MEMPOOL_F_USE_TM         0x0020
> +#define MEMPOOL_F_NO_SECONDARY   0x0040
> +
>  
>  /**
>   * @internal When debug is enabled, store some statistics.
> @@ -728,7 +813,6 @@ rte_dom0_mempool_create(const char *name, unsigned n, unsigned elt_size,
>  		rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
>  		int socket_id, unsigned flags);
>  
> -
>  /**
>   * Dump the status of the mempool to the console.
>   *
> @@ -753,7 +837,7 @@ void rte_mempool_dump(FILE *f, const struct rte_mempool *mp);
>   */
>  static inline void __attribute__((always_inline))
>  __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
> -		    unsigned n, int is_mp)
> +		    unsigned n, __attribute__((unused)) int is_mp)
>  {
>  #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
>  	struct rte_mempool_cache *cache;
> @@ -769,8 +853,7 @@ __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
>  
>  #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
>  	/* cache is not enabled or single producer or non-EAL thread */
> -	if (unlikely(cache_size == 0 || is_mp == 0 ||
> -		     lcore_id >= RTE_MAX_LCORE))
> +	if (unlikely(cache_size == 0 || lcore_id >= RTE_MAX_LCORE))
>  		goto ring_enqueue;
>  
>  	/* Go straight to ring if put would overflow mem allocated for cache */
> @@ -793,8 +876,8 @@ __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
>  
>  	cache->len += n;
>  
> -	if (cache->len >= flushthresh) {
> -		rte_ring_mp_enqueue_bulk(mp->ring, &cache->objs[cache_size],
> +	if (unlikely(cache->len >= flushthresh)) {
> +		rte_mempool_ext_put_bulk(mp, &cache->objs[cache_size],
>  				cache->len - cache_size);
>  		cache->len = cache_size;
>  	}
> @@ -804,22 +887,10 @@ __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
>  ring_enqueue:
>  #endif /* RTE_MEMPOOL_CACHE_MAX_SIZE > 0 */
>  
> -	/* push remaining objects in ring */
> -#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
> -	if (is_mp) {
> -		if (rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n) < 0)
> -			rte_panic("cannot put objects in mempool\n");
> -	}
> -	else {
> -		if (rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n) < 0)
> -			rte_panic("cannot put objects in mempool\n");
> -	}
> -#else
> -	if (is_mp)
> -		rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n);
> -	else
> -		rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n);
> -#endif
> +	/* Increment stats counter to tell us how many pool puts happened */
> +	__MEMPOOL_STAT_ADD(mp, put_pool, n);
> +
> +	rte_mempool_ext_put_bulk(mp, obj_table, n);
>  }
>  
>  
> @@ -943,7 +1014,7 @@ rte_mempool_put(struct rte_mempool *mp, void *obj)
>   */
>  static inline int __attribute__((always_inline))
>  __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
> -		   unsigned n, int is_mc)
> +		   unsigned n, __attribute__((unused))int is_mc)
>  {
>  	int ret;
>  #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
> @@ -954,8 +1025,8 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
>  	uint32_t cache_size = mp->cache_size;
>  
>  	/* cache is not enabled or single consumer */
> -	if (unlikely(cache_size == 0 || is_mc == 0 ||
> -		     n >= cache_size || lcore_id >= RTE_MAX_LCORE))
> +	if (unlikely(cache_size == 0 || n >= cache_size ||
> +						lcore_id >= RTE_MAX_LCORE))
>  		goto ring_dequeue;
>  
>  	cache = &mp->local_cache[lcore_id];
> @@ -967,7 +1038,8 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
>  		uint32_t req = n + (cache_size - cache->len);
>  
>  		/* How many do we require i.e. number to fill the cache + the request */
> -		ret = rte_ring_mc_dequeue_bulk(mp->ring, &cache->objs[cache->len], req);
> +		ret = rte_mempool_ext_get_bulk(mp,
> +						&cache->objs[cache->len], req);
>  		if (unlikely(ret < 0)) {
>  			/*
>  			 * In the offchance that we are buffer constrained,
> @@ -995,10 +1067,7 @@ ring_dequeue:
>  #endif /* RTE_MEMPOOL_CACHE_MAX_SIZE > 0 */
>  
>  	/* get remaining objects from ring */
> -	if (is_mc)
> -		ret = rte_ring_mc_dequeue_bulk(mp->ring, obj_table, n);
> -	else
> -		ret = rte_ring_sc_dequeue_bulk(mp->ring, obj_table, n);
> +	ret = rte_mempool_ext_get_bulk(mp, obj_table, n);
>  
>  	if (ret < 0)
>  		__MEMPOOL_STAT_ADD(mp, get_fail, n);
> @@ -1401,6 +1470,82 @@ ssize_t rte_mempool_xmem_usage(void *vaddr, uint32_t elt_num, size_t elt_sz,
>  void rte_mempool_walk(void (*func)(const struct rte_mempool *, void *arg),
>  		      void *arg);
>  
> +/**
> + * Function to get an index to an external mempool manager
> + *
> + * @param name
> + *   The name of the mempool handler to search for in the list of handlers
> + * @return
> + *   The index of the mempool handler in the list of registered mempool
> + *   handlers
> + */
> +int16_t
> +rte_get_mempool_handler(const char *name);
> +
> +
> +/**
> + * Create a new mempool named *name* in memory.
> + *
> + * This function uses an externally defined alloc callback to allocate memory.
> + * Its size is set to n elements.
> + * All elements of the mempool are allocated separately to the mempool header.
> + *
> + * @param name
> + *   The name of the mempool.
> + * @param n
> + *   The number of elements in the mempool. The optimum size (in terms of
> + *   memory usage) for a mempool is when n is a power of two minus one:
> + *   n = (2^q - 1).
> + * @param cache_size
> + *   If cache_size is non-zero, the rte_mempool library will try to
> + *   limit the accesses to the common lockless pool, by maintaining a
> + *   per-lcore object cache. This argument must be lower or equal to
> + *   CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE and n / 1.5. It is advised to choose
> + *   cache_size to have "n modulo cache_size == 0": if this is
> + *   not the case, some elements will always stay in the pool and will
> + *   never be used. The access to the per-lcore table is of course
> + *   faster than the multi-producer/consumer pool. The cache can be
> + *   disabled if the cache_size argument is set to 0; it can be useful to
> + *   avoid losing objects in cache. Note that even if not used, the
> + *   memory space for cache is always reserved in a mempool structure,
> + *   except if CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE is set to 0.
> + * @param private_data_size
> + *   The size of the private data appended after the mempool
> + *   structure. This is useful for storing some private data after the
> + *   mempool structure, as is done for rte_mbuf_pool for example.
> + * @param mp_init
> + *   A function pointer that is called for initialization of the pool,
> + *   before object initialization. The user can initialize the private
> + *   data in this function if needed. This parameter can be NULL if
> + *   not needed.
> + * @param mp_init_arg
> + *   An opaque pointer to data that can be used in the mempool
> + *   constructor function.
> + * @param obj_init
> + *   A function pointer that is called for each object at
> + *   initialization of the pool. The user can set some meta data in
> + *   objects if needed. This parameter can be NULL if not needed.
> + *   The obj_init() function takes the mempool pointer, the init_arg,
> + *   the object pointer and the object number as parameters.
> + * @param obj_init_arg
> + *   An opaque pointer to data that can be used as an argument for
> + *   each call to the object constructor function.
> + * @param socket_id
> + *   The *socket_id* argument is the socket identifier in the case of
> + *   NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
> + *   constraint for the reserved zone.
> + * @param flags
> + * @return
> + *   The pointer to the new allocated mempool, on success. NULL on error
> + */
> +struct rte_mempool *
> +rte_mempool_create_ext(const char *name, unsigned n, unsigned elt_size,
> +		unsigned cache_size, unsigned private_data_size,
> +		rte_mempool_ctor_t *mp_init, void *mp_init_arg,
> +		rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
> +		int socket_id, unsigned flags,
> +		const char *handler_name);
> +
>  #ifdef __cplusplus
>  }
>  #endif
> diff --git a/lib/librte_mempool/rte_mempool_default.c b/lib/librte_mempool/rte_mempool_default.c
> new file mode 100644
> index 0000000..2493dc1
> --- /dev/null
> +++ b/lib/librte_mempool/rte_mempool_default.c
> @@ -0,0 +1,229 @@
> +/*-
> + *   BSD LICENSE
> + *
> + *   Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
> + *   All rights reserved.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in
> + *       the documentation and/or other materials provided with the
> + *       distribution.
> + *     * Neither the name of Intel Corporation nor the names of its
> + *       contributors may be used to endorse or promote products derived
> + *       from this software without specific prior written permission.
> + *
> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#include <stdio.h>
> +#include <rte_mempool.h>
> +#include <rte_malloc.h>
> +#include <string.h>
> +
> +#include "rte_mempool_internal.h"
> +
> +/*
> + * Indirect jump table to support external memory pools
> + */
> +struct rte_mempool_handler_list mempool_handler_list = {
> +	.sl =  RTE_SPINLOCK_INITIALIZER ,
> +	.num_handlers = 0
> +};
> +
> +/* TODO Convert to older mechanism of an array of structs */
> +int16_t
> +add_handler(struct rte_mempool_handler *h)
> +{
> +	int16_t handler_idx;
> +
> +	/* Take the spinlock: the handler list may be modified concurrently */
> +	rte_spinlock_lock(&mempool_handler_list.sl);
> +
> +	/* Check whether jump table has space */
> +	if (mempool_handler_list.num_handlers >= RTE_MEMPOOL_MAX_HANDLER_IDX) {
> +		rte_spinlock_unlock(&mempool_handler_list.sl);
> +		RTE_LOG(ERR, MEMPOOL,
> +				"Maximum number of mempool handlers exceeded\n");
> +		return -1;
> +	}
> +
> +	if ((h->put == NULL) || (h->get == NULL) ||
> +		(h->get_count == NULL)) {
> +		rte_spinlock_unlock(&mempool_handler_list.sl);
> +		 RTE_LOG(ERR, MEMPOOL,
> +					"Missing callback while registering mempool handler\n");
> +		return -1;
> +	}
> +
> +	/* add new handler index */
> +	handler_idx = mempool_handler_list.num_handlers++;
> +
> +	snprintf(mempool_handler_list.handler[handler_idx].name,
> +				RTE_MEMPOOL_NAMESIZE, "%s", h->name);
> +	mempool_handler_list.handler[handler_idx].alloc = h->alloc;
> +	mempool_handler_list.handler[handler_idx].put = h->put;
> +	mempool_handler_list.handler[handler_idx].get = h->get;
> +	mempool_handler_list.handler[handler_idx].get_count = h->get_count;
> +
> +	rte_spinlock_unlock(&mempool_handler_list.sl);
> +
> +	return handler_idx;
> +}
> +
> +/* TODO Convert to older mechanism of an array of structs */
> +int16_t
> +rte_get_mempool_handler(const char *name)
> +{
> +	int16_t i;
> +
> +	for (i = 0; i < mempool_handler_list.num_handlers; i++) {
> +		if (!strcmp(name, mempool_handler_list.handler[i].name))
> +			return i;
> +	}
> +	return -1;
> +}
> +
> +static int
> +common_ring_mp_put(void *p, void * const *obj_table, unsigned n)
> +{
> +	return rte_ring_mp_enqueue_bulk((struct rte_ring *)p, obj_table, n);
> +}
> +
> +static int
> +common_ring_sp_put(void *p, void * const *obj_table, unsigned n)
> +{
> +	return rte_ring_sp_enqueue_bulk((struct rte_ring *)p, obj_table, n);
> +}
> +
> +static int
> +common_ring_mc_get(void *p, void **obj_table, unsigned n)
> +{
> +	return rte_ring_mc_dequeue_bulk((struct rte_ring *)p, obj_table, n);
> +}
> +
> +static int
> +common_ring_sc_get(void *p, void **obj_table, unsigned n)
> +{
> +	return rte_ring_sc_dequeue_bulk((struct rte_ring *)p, obj_table, n);
> +}
> +
> +static unsigned
> +common_ring_get_count(void *p)
> +{
> +	return rte_ring_count((struct rte_ring *)p);
> +}
> +
> +
> +static void *
> +rte_mempool_common_ring_alloc(struct rte_mempool *mp,
> +		const char *name, unsigned n, int socket_id, unsigned flags)
> +{
> +	struct rte_ring *r;
> +	char rg_name[RTE_RING_NAMESIZE];
> +	int rg_flags = 0;
> +
> +	if (flags & MEMPOOL_F_SP_PUT)
> +		rg_flags |= RING_F_SP_ENQ;
> +	if (flags & MEMPOOL_F_SC_GET)
> +		rg_flags |= RING_F_SC_DEQ;
> +
> +	/* allocate the ring that will be used to store objects */
> +	/* Ring functions will return appropriate errors if we are
> +	 * running as a secondary process etc., so no checks made
> +	 * in this function for that condition */
> +	snprintf(rg_name, sizeof(rg_name), "%s-ring", name);
> +	r = rte_ring_create(rg_name, rte_align32pow2(n+1), socket_id, rg_flags);
> +	if (r == NULL)
> +		return NULL;
> +
> +	mp->rt_pool = (void *)r;
> +
> +	return (void *) r;
> +}
> +
> +void *
> +rte_mempool_ext_alloc(struct rte_mempool *mp,
> +		const char *name, unsigned n, int socket_id, unsigned flags)
> +{
> +	if (mempool_handler_list.handler[mp->handler_idx].alloc) {
> +		return (mempool_handler_list.handler[mp->handler_idx].alloc)
> +						(mp, name, n, socket_id, flags);
> +	}
> +	return NULL;
> +}
> +
> +inline int __attribute__((always_inline))
> +rte_mempool_ext_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
> +{
> +	return (mempool_handler_list.handler[mp->handler_idx].get)
> +						(mp->rt_pool, obj_table, n);
> +}
> +
> +inline int __attribute__((always_inline))
> +rte_mempool_ext_put_bulk(struct rte_mempool *mp, void * const *obj_table,
> +		unsigned n)
> +{
> +	return (mempool_handler_list.handler[mp->handler_idx].put)
> +						(mp->rt_pool, obj_table, n);
> +}
> +
> +int
> +rte_mempool_ext_get_count(const struct rte_mempool *mp)
> +{
> +	return (mempool_handler_list.handler[mp->handler_idx].get_count)
> +						(mp->rt_pool);
> +}
> +
> +static struct rte_mempool_handler handler_mp_mc = {
> +	.name = "ring_mp_mc",
> +	.alloc = rte_mempool_common_ring_alloc,
> +	.put = common_ring_mp_put,
> +	.get = common_ring_mc_get,
> +	.get_count = common_ring_get_count,
> +	.free = NULL
> +};
> +static struct rte_mempool_handler handler_sp_sc = {
> +	.name = "ring_sp_sc",
> +	.alloc = rte_mempool_common_ring_alloc,
> +	.put = common_ring_sp_put,
> +	.get = common_ring_sc_get,
> +	.get_count = common_ring_get_count,
> +	.free = NULL
> +};
> +static struct rte_mempool_handler handler_mp_sc = {
> +	.name = "ring_mp_sc",
> +	.alloc = rte_mempool_common_ring_alloc,
> +	.put = common_ring_mp_put,
> +	.get = common_ring_sc_get,
> +	.get_count = common_ring_get_count,
> +	.free = NULL
> +};
> +static struct rte_mempool_handler handler_sp_mc = {
> +	.name = "ring_sp_mc",
> +	.alloc = rte_mempool_common_ring_alloc,
> +	.put = common_ring_sp_put,
> +	.get = common_ring_mc_get,
> +	.get_count = common_ring_get_count,
> +	.free = NULL
> +};
> +
> +REGISTER_MEMPOOL_HANDLER(handler_mp_mc);
> +REGISTER_MEMPOOL_HANDLER(handler_sp_sc);
> +REGISTER_MEMPOOL_HANDLER(handler_mp_sc);
> +REGISTER_MEMPOOL_HANDLER(handler_sp_mc);
> diff --git a/lib/librte_mempool/rte_mempool_internal.h b/lib/librte_mempool/rte_mempool_internal.h
> new file mode 100644
> index 0000000..92b7bde
> --- /dev/null
> +++ b/lib/librte_mempool/rte_mempool_internal.h
> @@ -0,0 +1,74 @@
> +/*-
> + *   BSD LICENSE
> + *
> + *   Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
> + *   All rights reserved.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in
> + *       the documentation and/or other materials provided with the
> + *       distribution.
> + *     * Neither the name of Intel Corporation nor the names of its
> + *       contributors may be used to endorse or promote products derived
> + *       from this software without specific prior written permission.
> + *
> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#ifndef _RTE_MEMPOOL_INTERNAL_H_
> +#define _RTE_MEMPOOL_INTERNAL_H_
> +
> +#include <rte_spinlock.h>
> +#include <rte_mempool.h>
> +
> +#define RTE_MEMPOOL_MAX_HANDLER_IDX 16
> +
> +struct rte_mempool_handler {
> +	char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool handler */
> +
> +	rte_mempool_alloc_t alloc;
> +
> +	rte_mempool_put_t put __rte_cache_aligned;
> +
> +	rte_mempool_get_t get __rte_cache_aligned;
> +
> +	rte_mempool_get_count get_count __rte_cache_aligned;
> +
> +	rte_mempool_free_t free __rte_cache_aligned;
> +};

IMO, the structure should be cache aligned, not the individual
elements, as the elements are likely read-only in the fast path.

> +
> +struct rte_mempool_handler_list {
> +	rte_spinlock_t sl;		  /**< Spinlock for add/delete. */
> +
> +	int32_t num_handlers;	  /**< Number of handlers that are valid. */
> +
> +	/* storage for all possible handlers */
> +	struct rte_mempool_handler handler[RTE_MEMPOOL_MAX_HANDLER_IDX];
> +};
> +
> +int16_t add_handler(struct rte_mempool_handler *h);
> +
> +#define REGISTER_MEMPOOL_HANDLER(h) \
> +static int16_t __attribute__((used)) testfn_##h(void);\
> +int16_t __attribute__((constructor, used)) testfn_##h(void)\
> +{\
> +	return add_handler(&h);\
> +}
> +
> +#endif
> -- 
> 1.9.3
> 

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH 3/5] mempool: add custom external mempool handler example
  2016-01-26 17:25 ` [PATCH 3/5] mempool: add custom external mempool handler example David Hunt
@ 2016-01-28 17:54   ` Jerin Jacob
  0 siblings, 0 replies; 238+ messages in thread
From: Jerin Jacob @ 2016-01-28 17:54 UTC (permalink / raw)
  To: David Hunt; +Cc: dev

On Tue, Jan 26, 2016 at 05:25:53PM +0000, David Hunt wrote:
> adds a simple ring-based mempool handler using mallocs for each object

nit,

$ git am /export/dh/3
Applying: mempool: add custom external mempool handler example
/export/dpdk-master/.git/rebase-apply/patch:184: new blank line at EOF.
+
warning: 1 line adds whitespace errors.

> 
> Signed-off-by: David Hunt <david.hunt@intel.com>
> ---
>  lib/librte_mempool/Makefile         |   1 +
>  lib/librte_mempool/custom_mempool.c | 160 ++++++++++++++++++++++++++++++++++++
>  2 files changed, 161 insertions(+)
>  create mode 100644 lib/librte_mempool/custom_mempool.c
> 
> diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
> index d795b48..4f72546 100644
> --- a/lib/librte_mempool/Makefile
> +++ b/lib/librte_mempool/Makefile
> @@ -44,6 +44,7 @@ LIBABIVER := 1
>  SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool.c
>  SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_default.c
>  SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_stack.c
> +SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  custom_mempool.c
>  ifeq ($(CONFIG_RTE_LIBRTE_XEN_DOM0),y)
>  SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_dom0_mempool.c
>  endif
> diff --git a/lib/librte_mempool/custom_mempool.c b/lib/librte_mempool/custom_mempool.c
> new file mode 100644
> index 0000000..a9da8c5
> --- /dev/null
> +++ b/lib/librte_mempool/custom_mempool.c
> @@ -0,0 +1,160 @@
> +/*-
> + *   BSD LICENSE
> + *
> + *   Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
> + *   All rights reserved.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in
> + *       the documentation and/or other materials provided with the
> + *       distribution.
> + *     * Neither the name of Intel Corporation nor the names of its
> + *       contributors may be used to endorse or promote products derived
> + *       from this software without specific prior written permission.
> + *
> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#include <string.h>
> +#include <stdio.h>
> +#include <stdlib.h>
> +#include <stdint.h>
> +#include <inttypes.h>
> +#include <stdarg.h>
> +#include <errno.h>
> +#include <sys/queue.h>
> +
> +#include <rte_mempool.h>
> +
> +#include "rte_mempool_internal.h"
> +
> +/*
> + * Mempool
> + * =======
> + *
> + * Basic tests: done on one core with and without cache:
> + *
> + *    - Get one object, put one object
> + *    - Get two objects, put two objects
> + *    - Get all objects, test that their content is not modified and
> + *      put them back in the pool.
> + */
> +
> +#define TIME_S 5
> +#define MEMPOOL_ELT_SIZE 2048
> +#define MAX_KEEP 128
> +#define MEMPOOL_SIZE 8192
> +
> +#if 0
> +/*
> + * For our example mempool handler, we use the following struct to
> + * pass info to our create callback so it can call rte_mempool_create
> + */
> +struct custom_mempool_alloc_params {
> +	char ring_name[RTE_RING_NAMESIZE];
> +	unsigned n_elt;
> +	unsigned elt_size;
> +};
> +#endif
> +
> +/*
> + * Simple example of custom mempool structure. Holds pointers to all the
> + * elements which are simply malloc'd in this example.
> + */
> +struct custom_mempool {
> +	struct rte_ring *r;             /* Ring to manage elements */
> +	void *elements[MEMPOOL_SIZE];   /* Element pointers */
> +};
> +
> +/*
> + * Loop though all the element pointers and allocate a chunk of memory, then
> + * insert that memory into the ring.
> + */
> +static void *
> +custom_mempool_alloc(struct rte_mempool *mp,
> +		const char *name, unsigned n,
> +		__attribute__((unused)) int socket_id,
> +		__attribute__((unused)) unsigned flags)
> +
> +{
> +	static struct custom_mempool *cm;
> +	uint32_t *objnum;
> +	unsigned int i;
> +
> +	cm = malloc(sizeof(struct custom_mempool));
> +
> +	/* Create the ring so we can enqueue/dequeue */
> +	cm->r = rte_ring_create(name,
> +						rte_align32pow2(n+1), 0, 0);
> +	if (cm->r == NULL)
> +		return NULL;
> +
> +	/*
> +	 * Loop around the elements and allocate the required memory
> +	 * and place them in the ring.
> +	 * Not worried about alignment or performance for this example.
> +	 * Also, set the first 32-bits to be the element number so we
> +	 * can check later on.
> +	 */
> +	for (i = 0; i < n; i++) {
> +		cm->elements[i] = malloc(mp->elt_size);
> +		memset(cm->elements[i], 0, mp->elt_size);
> +		objnum = (uint32_t *)cm->elements[i];
> +		*objnum = i;
> +		rte_ring_sp_enqueue_bulk(cm->r, &(cm->elements[i]), 1);
> +	}
> +
> +	return cm;
> +}
> +
> +static int
> +custom_mempool_put(void *p, void * const *obj_table, unsigned n)
> +{
> +	struct custom_mempool *cm = (struct custom_mempool *)p;
> +
> +	return rte_ring_mp_enqueue_bulk(cm->r, obj_table, n);
> +}
> +
> +
> +static int
> +custom_mempool_get(void *p, void **obj_table, unsigned n)
> +{
> +	struct custom_mempool *cm = (struct custom_mempool *)p;
> +
> +	return rte_ring_mc_dequeue_bulk(cm->r, obj_table, n);
> +}
> +
> +static unsigned
> +custom_mempool_get_count(void *p)
> +{
> +	struct custom_mempool *cm = (struct custom_mempool *)p;
> +
> +	return rte_ring_count(cm->r);
> +}
> +
> +static struct rte_mempool_handler mempool_handler_custom = {
> +	.name = "custom_handler",
> +	.alloc = custom_mempool_alloc,
> +	.put = custom_mempool_put,
> +	.get = custom_mempool_get,
> +	.get_count = custom_mempool_get_count,
> +};
> +
> +REGISTER_MEMPOOL_HANDLER(mempool_handler_custom);
> +
> -- 
> 1.9.3
> 


* Re: [PATCH 0/5] add external mempool manager
  2016-01-28 17:26 ` [PATCH 0/5] add external mempool manager Jerin Jacob
@ 2016-01-29 13:40   ` Hunt, David
  2016-01-29 17:16     ` Jerin Jacob
  0 siblings, 1 reply; 238+ messages in thread
From: Hunt, David @ 2016-01-29 13:40 UTC (permalink / raw)
  To: Jerin Jacob; +Cc: dev

On 28/01/2016 17:26, Jerin Jacob wrote:
> On Tue, Jan 26, 2016 at 05:25:50PM +0000, David Hunt wrote:
>> Hi all on the list.
>>
>> Here's a proposed patch for an external mempool manager
>>
>> The External Mempool Manager is an extension to the mempool API that allows
>> users to add and use an external mempool manager, which allows external memory
>> subsystems such as external hardware memory management systems and software
>> based memory allocators to be used with DPDK.
>
> I like this approach. It will be useful for external hardware memory
> pool managers.
>
> BTW, Do you encounter any performance impact on changing to function
> pointer based approach?

Jerin,
    Thanks for your comments.

The performance impacts I've seen depends on whether I'm using an object 
cache for the mempool or not. Without object cache, I see between 0-10% 
degradation. With object cache, I see a slight performance gain of 
between 0-5%. But that will most likely vary from system to system.

>> The existing API to the internal DPDK mempool manager will remain unchanged
>> and will be backward compatible.
>>
>> There are two aspects to external mempool manager.
>>    1. Adding the code for your new mempool handler. This is achieved by adding a
>>       new mempool handler source file into the librte_mempool library, and
>>       using the REGISTER_MEMPOOL_HANDLER macro.
>>    2. Using the new API to call rte_mempool_create_ext to create a new mempool
>>       using the name parameter to identify which handler to use.
>>
>> New API calls added
>>   1. A new mempool 'create' function which accepts mempool handler name.
>>   2. A new mempool 'rte_get_mempool_handler' function which accepts mempool
>>      handler name, and returns the index to the relevant set of callbacks for
>>      that mempool handler
>>
>> Several external mempool managers may be used in the same application. A new
>> mempool can then be created by using the new 'create' function, providing the
>> mempool handler name to point the mempool to the relevant mempool manager
>> callback structure.
>>
>> The old 'create' function can still be called by legacy programs, and will
>> internally work out the mempool handle based on the flags provided (single
>> producer, single consumer, etc). By default handles are created internally to
>> implement the built-in DPDK mempool manager and mempool types.
>>
>> The external mempool manager needs to provide the following functions.
>>   1. alloc     - allocates the mempool memory, and adds each object onto a ring
>>   2. put       - puts an object back into the mempool once an application has
>>                  finished with it
>>   3. get       - gets an object from the mempool for use by the application
>>   4. get_count - gets the number of available objects in the mempool
>>   5. free      - frees the mempool memory
>>
>> Every time a get/put/get_count is called from the application/PMD, the
>> callback for that mempool is called. These functions are in the fastpath,
>> and any unoptimised handlers may limit performance.
>>
>> The new APIs are as follows:
>>
>> 1. rte_mempool_create_ext
>>
>> struct rte_mempool *
>> rte_mempool_create_ext(const char * name, unsigned n,
>>          unsigned cache_size, unsigned private_data_size,
>>          int socket_id, unsigned flags,
>>          const char * handler_name);
>>
>> 2. rte_get_mempool_handler
>>
>> int16_t
>> rte_get_mempool_handler(const char *name);
>
> Do we need the above public API? In any case we need the rte_mempool*
> pointer to operate on mempools (which holds the index anyway).
>
> Maybe a similar functional API with a different name/return value would be
> better, to figure out whether a given "name" is registered or not in an
> ethernet driver that depends on a particular HW pool manager.

Good point. An earlier revision required getting the index first, then 
passing that to the create_ext call. Now that the call is by name, the 
'get' is mostly redundant. As you suggest, we may need an API for 
checking the existence of a particular manager/handler. Then again, we 
could always return an error from the create_ext api if it fails to find 
that handler. I'll remove the 'get' for the moment.

Thanks,
David.


* Re: [PATCH 0/5] add external mempool manager
  2016-01-29 13:40   ` Hunt, David
@ 2016-01-29 17:16     ` Jerin Jacob
  0 siblings, 0 replies; 238+ messages in thread
From: Jerin Jacob @ 2016-01-29 17:16 UTC (permalink / raw)
  To: Hunt, David; +Cc: dev

On Fri, Jan 29, 2016 at 01:40:40PM +0000, Hunt, David wrote:
> On 28/01/2016 17:26, Jerin Jacob wrote:
> >On Tue, Jan 26, 2016 at 05:25:50PM +0000, David Hunt wrote:
> >>Hi all on the list.
> >>
> >>Here's a proposed patch for an external mempool manager
> >>
> >>The External Mempool Manager is an extension to the mempool API that allows
> >>users to add and use an external mempool manager, which allows external memory
> >>subsystems such as external hardware memory management systems and software
> >>based memory allocators to be used with DPDK.
> >
> >I like this approach.It will be useful for external hardware memory
> >pool managers.
> >
> >BTW, Do you encounter any performance impact on changing to function
> >pointer based approach?
>
> Jerin,
>    Thanks for your comments.
>
> The performance impacts I've seen depends on whether I'm using an object
> cache for the mempool or not. Without object cache, I see between 0-10%
> degradation. With object cache, I see a slight performance gain of between
> 0-5%. But that will most likely vary from system to system.
>
> >>The existing API to the internal DPDK mempool manager will remain unchanged
> >>and will be backward compatible.
> >>
> >>There are two aspects to external mempool manager.
> >>   1. Adding the code for your new mempool handler. This is achieved by adding a
> >>      new mempool handler source file into the librte_mempool library, and
> >>      using the REGISTER_MEMPOOL_HANDLER macro.
> >>   2. Using the new API to call rte_mempool_create_ext to create a new mempool
> >>      using the name parameter to identify which handler to use.
> >>
> >>New API calls added
> >>  1. A new mempool 'create' function which accepts mempool handler name.
> >>  2. A new mempool 'rte_get_mempool_handler' function which accepts mempool
> >>     handler name, and returns the index to the relevant set of callbacks for
> >>     that mempool handler
> >>
> >>Several external mempool managers may be used in the same application. A new
> >>mempool can then be created by using the new 'create' function, providing the
> >>mempool handler name to point the mempool to the relevant mempool manager
> >>callback structure.
> >>
> >>The old 'create' function can still be called by legacy programs, and will
> >>internally work out the mempool handle based on the flags provided (single
> >>producer, single consumer, etc). By default handles are created internally to
> >>implement the built-in DPDK mempool manager and mempool types.
> >>
> >>The external mempool manager needs to provide the following functions.
> >>  1. alloc     - allocates the mempool memory, and adds each object onto a ring
> >>  2. put       - puts an object back into the mempool once an application has
> >>                 finished with it
> >>  3. get       - gets an object from the mempool for use by the application
> >>  4. get_count - gets the number of available objects in the mempool
> >>  5. free      - frees the mempool memory
> >>
> >>Every time a get/put/get_count is called from the application/PMD, the
> >>callback for that mempool is called. These functions are in the fastpath,
> >>and any unoptimised handlers may limit performance.
> >>
> >>The new APIs are as follows:
> >>
> >>1. rte_mempool_create_ext
> >>
> >>struct rte_mempool *
> >>rte_mempool_create_ext(const char * name, unsigned n,
> >>         unsigned cache_size, unsigned private_data_size,
> >>         int socket_id, unsigned flags,
> >>         const char * handler_name);
> >>
> >>2. rte_get_mempool_handler
> >>
> >>int16_t
> >>rte_get_mempool_handler(const char *name);
> >
> >Do we need the above public API? In any case we need the rte_mempool*
> >pointer to operate on mempools (which holds the index anyway).
> >
> >Perhaps a similar API with a different name/return value would be better,
> >to figure out whether a given "name" is registered or not, for an ethernet
> >driver which depends on a particular HW pool manager.
>
> Good point. An earlier revision required getting the index first, then
> passing that to the create_ext call. Now that the call is by name, the 'get'
> is mostly redundant. As you suggest, we may need an API for checking the
> existence of a particular manager/handler. Then again, we could always
> return an error from the create_ext api if it fails to find that handler.
> I'll remove the 'get' for the moment.

OK. But I think an API to get the external pool manager name is still
required. It's useful in an ethernet driver, where the driver needs to take
care of any special arrangement required by a specific HW pool manager.

Something like below; feel free to change the API name:

static inline char * __attribute__((always_inline))
rte_mempool_ext_get_name(struct rte_mempool *mp)
{
        return mempool_handler_list.handler[mp->handler_idx].name;
}
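For what it's worth, a self-contained sketch of how a driver could consume such a name query. The structs here are trimmed mock stand-ins, not the real DPDK definitions, and "my_hw_pool" is a hypothetical handler name used only for illustration:

```c
#include <assert.h>
#include <string.h>

struct mock_handler { const char *name; };

/* Mock registry; "my_hw_pool" stands in for a vendor HW pool manager. */
static struct mock_handler handlers[] = {
	{ "ring_mp_mc" }, { "my_hw_pool" },
};

/* Trimmed stand-in for struct rte_mempool: only the handler index. */
struct mock_mempool { int handler_idx; };

static const char *
mempool_get_name(const struct mock_mempool *mp)
{
	return handlers[mp->handler_idx].name;
}

/* An ethernet driver could branch on the name to apply any
 * HW-specific arrangement for its preferred pool manager. */
static int
needs_hw_setup(const struct mock_mempool *mp)
{
	return strcmp(mempool_get_name(mp), "my_hw_pool") == 0;
}
```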


>
> Thanks,
> David.
>


* Re: [PATCH 1/5] mempool: add external mempool manager support
  2016-01-28 17:52   ` Jerin Jacob
@ 2016-02-03 14:16     ` Hunt, David
  2016-02-04 13:23       ` Jerin Jacob
  0 siblings, 1 reply; 238+ messages in thread
From: Hunt, David @ 2016-02-03 14:16 UTC (permalink / raw)
  To: Jerin Jacob; +Cc: dev

On 28/01/2016 17:52, Jerin Jacob wrote:
> On Tue, Jan 26, 2016 at 05:25:51PM +0000, David Hunt wrote:
>> Adds the new rte_mempool_create_ext api and callback mechanism for
>> external mempool handlers
>>
>> Modifies the existing rte_mempool_create to set up the handler_idx to
>> the relevant mempool handler based on the handler name:
>> 	ring_sp_sc
>> 	ring_mp_mc
>> 	ring_sp_mc
>> 	ring_mp_sc
>>
>> Signed-off-by: David Hunt <david.hunt@intel.com>
>> ---
>>   app/test/test_mempool_perf.c              |   1 -
>>   lib/librte_mempool/Makefile               |   1 +
>>   lib/librte_mempool/rte_mempool.c          | 210 +++++++++++++++++++--------
>>   lib/librte_mempool/rte_mempool.h          | 207 +++++++++++++++++++++++----
>>   lib/librte_mempool/rte_mempool_default.c  | 229 ++++++++++++++++++++++++++++++
>>   lib/librte_mempool/rte_mempool_internal.h |  74 ++++++++++
>>   6 files changed, 634 insertions(+), 88 deletions(-)
>>   create mode 100644 lib/librte_mempool/rte_mempool_default.c
>>   create mode 100644 lib/librte_mempool/rte_mempool_internal.h
>>
>> diff --git a/app/test/test_mempool_perf.c b/app/test/test_mempool_perf.c
>> index cdc02a0..091c1df 100644
>> --- a/app/test/test_mempool_perf.c
>> +++ b/app/test/test_mempool_perf.c
>> @@ -161,7 +161,6 @@ per_lcore_mempool_test(__attribute__((unused)) void *arg)
>>   							   n_get_bulk);
>>   				if (unlikely(ret < 0)) {
>>   					rte_mempool_dump(stdout, mp);
>> -					rte_ring_dump(stdout, mp->ring);
>>   					/* in this case, objects are lost... */
>>   					return -1;
>>   				}
>> diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
>> index a6898ef..7c81ef6 100644
>> --- a/lib/librte_mempool/Makefile
>> +++ b/lib/librte_mempool/Makefile
>> @@ -42,6 +42,7 @@ LIBABIVER := 1
>>
>>   # all source are stored in SRCS-y
>>   SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool.c
>> +SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_default.c
>>   ifeq ($(CONFIG_RTE_LIBRTE_XEN_DOM0),y)
>>   SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_dom0_mempool.c
>>   endif
>> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
>> index aff5f6d..8c01838 100644
>> --- a/lib/librte_mempool/rte_mempool.c
>> +++ b/lib/librte_mempool/rte_mempool.c
>> @@ -59,10 +59,11 @@
>>   #include <rte_spinlock.h>
>>
>>   #include "rte_mempool.h"
>> +#include "rte_mempool_internal.h"
>>
>>   TAILQ_HEAD(rte_mempool_list, rte_tailq_entry);
>>
>> -static struct rte_tailq_elem rte_mempool_tailq = {
>> +struct rte_tailq_elem rte_mempool_tailq = {
>>   	.name = "RTE_MEMPOOL",
>>   };
>>   EAL_REGISTER_TAILQ(rte_mempool_tailq)
>> @@ -149,7 +150,7 @@ mempool_add_elem(struct rte_mempool *mp, void *obj, uint32_t obj_idx,
>>   		obj_init(mp, obj_init_arg, obj, obj_idx);
>>
>>   	/* enqueue in ring */
>> -	rte_ring_sp_enqueue(mp->ring, obj);
>> +	rte_mempool_ext_put_bulk(mp, &obj, 1);
>>   }
>>
>>   uint32_t
>> @@ -375,48 +376,28 @@ rte_mempool_xmem_usage(void *vaddr, uint32_t elt_num, size_t elt_sz,
>>   	return usz;
>>   }
>>
>> -#ifndef RTE_LIBRTE_XEN_DOM0
>> -/* stub if DOM0 support not configured */
>> -struct rte_mempool *
>> -rte_dom0_mempool_create(const char *name __rte_unused,
>> -			unsigned n __rte_unused,
>> -			unsigned elt_size __rte_unused,
>> -			unsigned cache_size __rte_unused,
>> -			unsigned private_data_size __rte_unused,
>> -			rte_mempool_ctor_t *mp_init __rte_unused,
>> -			void *mp_init_arg __rte_unused,
>> -			rte_mempool_obj_ctor_t *obj_init __rte_unused,
>> -			void *obj_init_arg __rte_unused,
>> -			int socket_id __rte_unused,
>> -			unsigned flags __rte_unused)
>> -{
>> -	rte_errno = EINVAL;
>> -	return NULL;
>> -}
>> -#endif
>> -
>>   /* create the mempool */
>>   struct rte_mempool *
>>   rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
>> -		   unsigned cache_size, unsigned private_data_size,
>> -		   rte_mempool_ctor_t *mp_init, void *mp_init_arg,
>> -		   rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
>> -		   int socket_id, unsigned flags)
>> +			unsigned cache_size, unsigned private_data_size,
>> +			rte_mempool_ctor_t *mp_init, void *mp_init_arg,
>> +			rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
>> +			int socket_id, unsigned flags)
>>   {
>>   	if (rte_xen_dom0_supported())
>>   		return rte_dom0_mempool_create(name, n, elt_size,
>> -					       cache_size, private_data_size,
>> -					       mp_init, mp_init_arg,
>> -					       obj_init, obj_init_arg,
>> -					       socket_id, flags);
>> +			cache_size, private_data_size,
>> +			mp_init, mp_init_arg,
>> +			obj_init, obj_init_arg,
>> +			socket_id, flags);
>>   	else
>>   		return rte_mempool_xmem_create(name, n, elt_size,
>> -					       cache_size, private_data_size,
>> -					       mp_init, mp_init_arg,
>> -					       obj_init, obj_init_arg,
>> -					       socket_id, flags,
>> -					       NULL, NULL, MEMPOOL_PG_NUM_DEFAULT,
>> -					       MEMPOOL_PG_SHIFT_MAX);
>> +			cache_size, private_data_size,
>> +			mp_init, mp_init_arg,
>> +			obj_init, obj_init_arg,
>> +			socket_id, flags,
>> +			NULL, NULL,
>> +			MEMPOOL_PG_NUM_DEFAULT, MEMPOOL_PG_SHIFT_MAX);
>>   }
>>
>>   /*
>> @@ -435,11 +416,9 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
>>   		const phys_addr_t paddr[], uint32_t pg_num, uint32_t pg_shift)
>>   {
>>   	char mz_name[RTE_MEMZONE_NAMESIZE];
>> -	char rg_name[RTE_RING_NAMESIZE];
>>   	struct rte_mempool_list *mempool_list;
>>   	struct rte_mempool *mp = NULL;
>>   	struct rte_tailq_entry *te;
>> -	struct rte_ring *r;
>>   	const struct rte_memzone *mz;
>>   	size_t mempool_size;
>>   	int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
>> @@ -469,7 +448,7 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
>>
>>   	/* asked cache too big */
>>   	if (cache_size > RTE_MEMPOOL_CACHE_MAX_SIZE ||
>> -	    CALC_CACHE_FLUSHTHRESH(cache_size) > n) {
>> +		CALC_CACHE_FLUSHTHRESH(cache_size) > n) {
>>   		rte_errno = EINVAL;
>>   		return NULL;
>>   	}
>> @@ -502,16 +481,8 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
>>   		return NULL;
>>   	}
>>
>> -	rte_rwlock_write_lock(RTE_EAL_MEMPOOL_RWLOCK);
>>
>> -	/* allocate the ring that will be used to store objects */
>> -	/* Ring functions will return appropriate errors if we are
>> -	 * running as a secondary process etc., so no checks made
>> -	 * in this function for that condition */
>> -	snprintf(rg_name, sizeof(rg_name), RTE_MEMPOOL_MZ_FORMAT, name);
>> -	r = rte_ring_create(rg_name, rte_align32pow2(n+1), socket_id, rg_flags);
>> -	if (r == NULL)
>> -		goto exit;
>> +	rte_rwlock_write_lock(RTE_EAL_MEMPOOL_RWLOCK);
>>
>>   	/*
>>   	 * reserve a memory zone for this mempool: private data is
>> @@ -588,7 +559,6 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
>>   	memset(mp, 0, sizeof(*mp));
>>   	snprintf(mp->name, sizeof(mp->name), "%s", name);
>>   	mp->phys_addr = mz->phys_addr;
>> -	mp->ring = r;
>>   	mp->size = n;
>>   	mp->flags = flags;
>>   	mp->elt_size = objsz.elt_size;
>> @@ -598,6 +568,22 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
>>   	mp->cache_flushthresh = CALC_CACHE_FLUSHTHRESH(cache_size);
>>   	mp->private_data_size = private_data_size;
>>
>> +	/*
>> +	 * Since we have 4 combinations of the SP/SC/MP/MC, and stack,
>> +	 * examine the
>> +	 * flags to set the correct index into the handler table.
>> +	 */
>> +	if (flags & MEMPOOL_F_USE_STACK)
>> +		mp->handler_idx = rte_get_mempool_handler("stack");
>> +	else if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
>> +		mp->handler_idx = rte_get_mempool_handler("ring_sp_sc");
>> +	else if (flags & MEMPOOL_F_SP_PUT)
>> +		mp->handler_idx = rte_get_mempool_handler("ring_sp_mc");
>> +	else if (flags & MEMPOOL_F_SC_GET)
>> +		mp->handler_idx = rte_get_mempool_handler("ring_mp_sc");
>> +	else
>> +		mp->handler_idx = rte_get_mempool_handler("ring_mp_mc");
>> +
>
> Why still use flag-based selection? Why not name-based? See below
> for more description.


The old API does not have a 'name' parameter, so it needs to work out which
handler to use based on the flags. This is not necessary in the new API 
call, which uses the name-based index directly.
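To spell out that legacy mapping, here is a standalone sketch of the flag-to-handler-name translation described above. The flag values mirror the MEMPOOL_F_* definitions in the patch, but the function name is ours, and the single-producer/single-consumer branch is written as requiring both bits (the intended "sp + sc" semantics):

```c
#include <assert.h>
#include <string.h>

#define F_SP_PUT	0x0004	/* default put is single-producer */
#define F_SC_GET	0x0008	/* default get is single-consumer */
#define F_USE_STACK	0x0010	/* use a stack for the common pool */

/* Map legacy rte_mempool_create() flags to a handler name, as the
 * old API has no 'name' parameter to select a handler directly. */
static const char *
handler_name_from_flags(unsigned flags)
{
	if (flags & F_USE_STACK)
		return "stack";
	if ((flags & F_SP_PUT) && (flags & F_SC_GET))
		return "ring_sp_sc";
	if (flags & F_SP_PUT)
		return "ring_sp_mc";
	if (flags & F_SC_GET)
		return "ring_mp_sc";
	return "ring_mp_mc";	/* safest default: multi-producer/consumer */
}
```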


>>   	/* calculate address of the first element for continuous mempool. */
>>   	obj = (char *)mp + MEMPOOL_HEADER_SIZE(mp, pg_num) +
>>   		private_data_size;
>> @@ -613,7 +599,6 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
>>   		mp->elt_va_start = (uintptr_t)obj;
>>   		mp->elt_pa[0] = mp->phys_addr +
>>   			(mp->elt_va_start - (uintptr_t)mp);
>> -
>>   	/* mempool elements in a separate chunk of memory. */
>>   	} else {
>>   		mp->elt_va_start = (uintptr_t)vaddr;
>> @@ -622,6 +607,10 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
>>
>>   	mp->elt_va_end = mp->elt_va_start;
>>
>> +	/* Parameters are setup. Call the mempool handler alloc */
>> +	if ((rte_mempool_ext_alloc(mp, name, n, socket_id, flags)) == NULL)
>> +		goto exit;
>> +
>>   	/* call the initializer */
>>   	if (mp_init)
>>   		mp_init(mp, mp_init_arg);
>> @@ -646,7 +635,7 @@ rte_mempool_count(const struct rte_mempool *mp)
>>   {
>>   	unsigned count;
>>
>> -	count = rte_ring_count(mp->ring);
>> +	count = rte_mempool_ext_get_count(mp);
>>
>>   #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
>>   	{
>> @@ -681,7 +670,9 @@ rte_mempool_dump_cache(FILE *f, const struct rte_mempool *mp)
>>   	fprintf(f, "    cache_size=%"PRIu32"\n", mp->cache_size);
>>   	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
>>   		cache_count = mp->local_cache[lcore_id].len;
>> -		fprintf(f, "    cache_count[%u]=%u\n", lcore_id, cache_count);
>> +		if (cache_count > 0)
>> +			fprintf(f, "    cache_count[%u]=%u\n",
>> +						lcore_id, cache_count);
>>   		count += cache_count;
>>   	}
>>   	fprintf(f, "    total_cache_count=%u\n", count);
>> @@ -802,14 +793,13 @@ rte_mempool_dump(FILE *f, const struct rte_mempool *mp)
>>
>>   	fprintf(f, "mempool <%s>@%p\n", mp->name, mp);
>>   	fprintf(f, "  flags=%x\n", mp->flags);
>> -	fprintf(f, "  ring=<%s>@%p\n", mp->ring->name, mp->ring);
>>   	fprintf(f, "  phys_addr=0x%" PRIx64 "\n", mp->phys_addr);
>>   	fprintf(f, "  size=%"PRIu32"\n", mp->size);
>>   	fprintf(f, "  header_size=%"PRIu32"\n", mp->header_size);
>>   	fprintf(f, "  elt_size=%"PRIu32"\n", mp->elt_size);
>>   	fprintf(f, "  trailer_size=%"PRIu32"\n", mp->trailer_size);
>>   	fprintf(f, "  total_obj_size=%"PRIu32"\n",
>> -	       mp->header_size + mp->elt_size + mp->trailer_size);
>> +		   mp->header_size + mp->elt_size + mp->trailer_size);
>>
>>   	fprintf(f, "  private_data_size=%"PRIu32"\n", mp->private_data_size);
>>   	fprintf(f, "  pg_num=%"PRIu32"\n", mp->pg_num);
>> @@ -825,7 +815,7 @@ rte_mempool_dump(FILE *f, const struct rte_mempool *mp)
>>   			mp->size);
>>
>>   	cache_count = rte_mempool_dump_cache(f, mp);
>> -	common_count = rte_ring_count(mp->ring);
>> +	common_count = /* rte_ring_count(mp->ring)*/0;
>>   	if ((cache_count + common_count) > mp->size)
>>   		common_count = mp->size - cache_count;
>>   	fprintf(f, "  common_pool_count=%u\n", common_count);
>> @@ -904,7 +894,7 @@ rte_mempool_lookup(const char *name)
>>   }
>>
>>   void rte_mempool_walk(void (*func)(const struct rte_mempool *, void *),
>> -		      void *arg)
>> +			  void *arg)
>>   {
>>   	struct rte_tailq_entry *te = NULL;
>>   	struct rte_mempool_list *mempool_list;
>> @@ -919,3 +909,111 @@ void rte_mempool_walk(void (*func)(const struct rte_mempool *, void *),
>>
>>   	rte_rwlock_read_unlock(RTE_EAL_MEMPOOL_RWLOCK);
>>   }
>> +
>> +
>> +/* create the mempool using and external mempool manager */
>> +struct rte_mempool *
>> +rte_mempool_create_ext(const char *name, unsigned n, unsigned elt_size,
>> +			unsigned cache_size, unsigned private_data_size,
>> +			rte_mempool_ctor_t *mp_init, void *mp_init_arg,
>> +			rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
>> +			int socket_id, unsigned flags,
>> +			const char *handler_name)
>> +{
>> +	char mz_name[RTE_MEMZONE_NAMESIZE];
>> +	struct rte_mempool_list *mempool_list;
>> +	struct rte_mempool *mp = NULL;
>> +	struct rte_tailq_entry *te;
>> +	const struct rte_memzone *mz;
>> +	size_t mempool_size;
>> +	int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
>> +	int rg_flags = 0;
>> +	int16_t handler_idx;
>> +
>> +	mempool_list = RTE_TAILQ_CAST(rte_mempool_tailq.head, rte_mempool_list);
>> +
>> +	/* asked cache too big */
>> +	if (cache_size > RTE_MEMPOOL_CACHE_MAX_SIZE ||
>> +		CALC_CACHE_FLUSHTHRESH(cache_size) > n) {
>> +		rte_errno = EINVAL;
>> +		return NULL;
>> +	}
>> +
>> +	handler_idx = rte_get_mempool_handler(handler_name);
>> +	if (handler_idx < 0) {
>> +		RTE_LOG(ERR, MEMPOOL, "Cannot find mempool handler by name!\n");
>> +		goto exit;
>> +	}
>> +
>> +	/* ring flags */
>> +	if (flags & MEMPOOL_F_SP_PUT)
>> +		rg_flags |= RING_F_SP_ENQ;
>> +	if (flags & MEMPOOL_F_SC_GET)
>> +		rg_flags |= RING_F_SC_DEQ;
>> +
>
> rg_flags  not used anywhere down


Thanks. I've removed them.


>> +	rte_rwlock_write_lock(RTE_EAL_MEMPOOL_RWLOCK);
>> +
>> +	/*
>> +	 * reserve a memory zone for this mempool: private data is
>> +	 * cache-aligned
>> +	 */
>> +	private_data_size = RTE_ALIGN_CEIL(private_data_size,
>> +							RTE_MEMPOOL_ALIGN);
>> +
>> +	/* try to allocate tailq entry */
>> +	te = rte_zmalloc("MEMPOOL_TAILQ_ENTRY", sizeof(*te), 0);
>> +	if (te == NULL) {
>> +		RTE_LOG(ERR, MEMPOOL, "Cannot allocate tailq entry!\n");
>> +		goto exit;
>> +	}
>> +
>> +	/*
>> +	 * If user provided an external memory buffer, then use it to
>> +	 * store mempool objects. Otherwise reserve a memzone that is large
>> +	 * enough to hold mempool header and metadata plus mempool objects.
>> +	 */
>> +	mempool_size = sizeof(*mp) + private_data_size;
>> +	mempool_size = RTE_ALIGN_CEIL(mempool_size, RTE_MEMPOOL_ALIGN);
>> +
>> +	snprintf(mz_name, sizeof(mz_name), RTE_MEMPOOL_MZ_FORMAT, name);
>> +
>> +	mz = rte_memzone_reserve(mz_name, mempool_size, socket_id, mz_flags);
>> +
>> +	/* no more memory */
>> +	if (mz == NULL) {
>> +		rte_free(te);
>> +		goto exit;
>> +	}
>> +
>> +	/* init the mempool structure */
>> +	mp = mz->addr;
>> +	memset(mp, 0, sizeof(*mp));
>> +	snprintf(mp->name, sizeof(mp->name), "%s", name);
>> +	mp->phys_addr = mz->phys_addr;
>> +	mp->size = n;
>> +	mp->flags = flags;
>> +	mp->cache_size = cache_size;
>> +	mp->cache_flushthresh = CALC_CACHE_FLUSHTHRESH(cache_size);
>> +	mp->private_data_size = private_data_size;
>> +	mp->handler_idx = handler_idx;
>> +	mp->elt_size = elt_size;
>> +	mp->rt_pool = rte_mempool_ext_alloc(mp, name, n, socket_id, flags);
>
>
> IMO, we can avoid the duplication of the above code with rte_mempool_create,
> i.e. rte_mempool_create -> rte_mempool_create_ext(.., "ring_mp_mc")


rte_mempool_create is not really a subset of rte_mempool_create_ext, so 
doing this would not be possible. I did have a look at this before 
pushing the patch, but the code was so different in each case that I 
decided to leave them as is. Maybe break out the section that sets up the 
mempool structure into a separate function?
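As a rough illustration of that refactor, the common "fill in the mempool header" code could move into one helper shared by both create paths. The struct below is a trimmed stand-in for struct rte_mempool, and `mempool_init_struct` is a hypothetical name, not anything in the patch:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Trimmed stand-in for struct rte_mempool. */
struct mini_mempool {
	char name[32];
	unsigned size;
	unsigned flags;
	unsigned cache_size;
	unsigned elt_size;
	int handler_idx;
};

/* Shared setup of the mempool header fields, callable from both
 * rte_mempool_xmem_create() and rte_mempool_create_ext(). */
static void
mempool_init_struct(struct mini_mempool *mp, const char *name, unsigned n,
		    unsigned elt_size, unsigned cache_size, unsigned flags,
		    int handler_idx)
{
	memset(mp, 0, sizeof(*mp));
	snprintf(mp->name, sizeof(mp->name), "%s", name);
	mp->size = n;
	mp->flags = flags;
	mp->cache_size = cache_size;
	mp->elt_size = elt_size;
	mp->handler_idx = handler_idx;
}
```

Each create path would then only add its own memzone reservation and handler-specific allocation around this common core.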


>> +
>> +	/* call the initializer */
>> +	if (mp_init)
>> +		mp_init(mp, mp_init_arg);
>> +
>> +	mempool_populate(mp, n, 1, obj_init, obj_init_arg);
>> +
>> +	te->data = (void *) mp;
>> +
>> +	rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
>> +	TAILQ_INSERT_TAIL(mempool_list, te, next);
>> +	rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
>> +
>> +exit:
>> +	rte_rwlock_write_unlock(RTE_EAL_MEMPOOL_RWLOCK);
>> +
>> +	return mp;
>> +
>> +}
>> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
>> index 6e2390a..620cfb7 100644
>> --- a/lib/librte_mempool/rte_mempool.h
>> +++ b/lib/librte_mempool/rte_mempool.h
>> @@ -88,6 +88,8 @@ extern "C" {
>>   struct rte_mempool_debug_stats {
>>   	uint64_t put_bulk;         /**< Number of puts. */
>>   	uint64_t put_objs;         /**< Number of objects successfully put. */
>> +	uint64_t put_pool_bulk;    /**< Number of puts into pool. */
>> +	uint64_t put_pool_objs;    /**< Number of objects into pool. */
>>   	uint64_t get_success_bulk; /**< Successful allocation number. */
>>   	uint64_t get_success_objs; /**< Objects successfully allocated. */
>>   	uint64_t get_fail_bulk;    /**< Failed allocation number. */
>> @@ -123,6 +125,7 @@ struct rte_mempool_objsz {
>>   #define RTE_MEMPOOL_NAMESIZE 32 /**< Maximum length of a memory pool. */
>>   #define RTE_MEMPOOL_MZ_PREFIX "MP_"
>>
>> +
>>   /* "MP_<name>" */
>>   #define	RTE_MEMPOOL_MZ_FORMAT	RTE_MEMPOOL_MZ_PREFIX "%s"
>>
>> @@ -175,12 +178,85 @@ struct rte_mempool_objtlr {
>>   #endif
>>   };
>>
>> +/* Handler functions for external mempool support */
>> +typedef void *(*rte_mempool_alloc_t)(struct rte_mempool *mp,
>> +		const char *name, unsigned n, int socket_id, unsigned flags);
>> +typedef int (*rte_mempool_put_t)(void *p,
>> +		void * const *obj_table, unsigned n);
>> +typedef int (*rte_mempool_get_t)(void *p, void **obj_table,
>> +		unsigned n);
>> +typedef unsigned (*rte_mempool_get_count)(void *p);
>> +typedef int(*rte_mempool_free_t)(struct rte_mempool *mp);
>> +
>> +/**
>> + * @internal wrapper for external mempool manager alloc callback.
>> + *
>> + * @param mp
>> + *   Pointer to the memory pool.
>> + * @param name
>> + *   Name of the statistics field to increment in the memory pool.
>> + * @param n
>> + *   Number to add to the object-oriented statistics.
>> + * @param socket_id
>> + *   socket id on which to allocate.
>> + * @param flags
>> + *   general flags to allocate function
>> + */
>> +void *
>> +rte_mempool_ext_alloc(struct rte_mempool *mp,
>> +		const char *name, unsigned n, int socket_id, unsigned flags);
>> +
>> +/**
>> + * @internal wrapper for external mempool manager get callback.
>> + *
>> + * @param mp
>> + *   Pointer to the memory pool.
>> + * @param obj_table
>> + *   Pointer to a table of void * pointers (objects).
>> + * @param n
>> + *	 Number of objects to get
>> + */
>> +int
>> +rte_mempool_ext_get_bulk(struct rte_mempool *mp, void **obj_table,
>> +		unsigned n);
>> +
>> +/**
>> + * @internal wrapper for external mempool manager put callback.
>> + *
>> + * @param mp
>> + *   Pointer to the memory pool.
>> + * @param obj_table
>> + *   Pointer to a table of void * pointers (objects).
>> + * @param n
>> + *   Number of objects to put
>> + */
>> +int
>> +rte_mempool_ext_put_bulk(struct rte_mempool *mp, void * const *obj_table,
>> +		unsigned n);
>> +
>> +/**
>> + * @internal wrapper for external mempool manager get_count callback.
>> + *
>> + * @param mp
>> + *   Pointer to the memory pool.
>> + */
>> +int
>> +rte_mempool_ext_get_count(const struct rte_mempool *mp);
>> +
>> +/**
>> + * @internal wrapper for external mempool manager free callback.
>> + *
>> + * @param mp
>> + *   Pointer to the memory pool.
>> + */
>> +int
>> +rte_mempool_ext_free(struct rte_mempool *mp);
>> +
>>   /**
>>    * The RTE mempool structure.
>>    */
>>   struct rte_mempool {
>>   	char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool. */
>> -	struct rte_ring *ring;           /**< Ring to store objects. */
>>   	phys_addr_t phys_addr;           /**< Phys. addr. of mempool struct. */
>>   	int flags;                       /**< Flags of the mempool. */
>>   	uint32_t size;                   /**< Size of the mempool. */
>> @@ -194,6 +270,11 @@ struct rte_mempool {
>>
>>   	unsigned private_data_size;      /**< Size of private data. */
>>
>> +	/* Common pool data structure pointer */
>> +	void *rt_pool __rte_cache_aligned;
>
> Do we need to split rt_pool onto the next cache line? "cache_size" and the
> other variables used in the fast path are already hot, and this change will
> occupy one more cache line.

OK, I'll take out the split.

>> +
>> +	int16_t handler_idx;
>> +
>>   #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
>>   	/** Per-lcore local cache. */
>>   	struct rte_mempool_cache local_cache[RTE_MAX_LCORE];
>> @@ -223,6 +304,10 @@ struct rte_mempool {
>>   #define MEMPOOL_F_NO_CACHE_ALIGN 0x0002 /**< Do not align objs on cache lines.*/
>>   #define MEMPOOL_F_SP_PUT         0x0004 /**< Default put is "single-producer".*/
>>   #define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
>> +#define MEMPOOL_F_USE_STACK      0x0010 /**< Use a stack for the common pool. */
>> +#define MEMPOOL_F_USE_TM         0x0020
>> +#define MEMPOOL_F_NO_SECONDARY   0x0040
>> +
>>
>>   /**
>>    * @internal When debug is enabled, store some statistics.
>> @@ -728,7 +813,6 @@ rte_dom0_mempool_create(const char *name, unsigned n, unsigned elt_size,
>>   		rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
>>   		int socket_id, unsigned flags);
>>
>> -
>>   /**
>>    * Dump the status of the mempool to the console.
>>    *
>> @@ -753,7 +837,7 @@ void rte_mempool_dump(FILE *f, const struct rte_mempool *mp);
>>    */
>>   static inline void __attribute__((always_inline))
>>   __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
>> -		    unsigned n, int is_mp)
>> +		    unsigned n, __attribute__((unused)) int is_mp)
>>   {
>>   #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
>>   	struct rte_mempool_cache *cache;
>> @@ -769,8 +853,7 @@ __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
>>
>>   #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
>>   	/* cache is not enabled or single producer or non-EAL thread */
>> -	if (unlikely(cache_size == 0 || is_mp == 0 ||
>> -		     lcore_id >= RTE_MAX_LCORE))
>> +	if (unlikely(cache_size == 0 || lcore_id >= RTE_MAX_LCORE))
>>   		goto ring_enqueue;
>>
>>   	/* Go straight to ring if put would overflow mem allocated for cache */
>> @@ -793,8 +876,8 @@ __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
>>
>>   	cache->len += n;
>>
>> -	if (cache->len >= flushthresh) {
>> -		rte_ring_mp_enqueue_bulk(mp->ring, &cache->objs[cache_size],
>> +	if (unlikely(cache->len >= flushthresh)) {
>> +		rte_mempool_ext_put_bulk(mp, &cache->objs[cache_size],
>>   				cache->len - cache_size);
>>   		cache->len = cache_size;
>>   	}
>> @@ -804,22 +887,10 @@ __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
>>   ring_enqueue:
>>   #endif /* RTE_MEMPOOL_CACHE_MAX_SIZE > 0 */
>>
>> -	/* push remaining objects in ring */
>> -#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
>> -	if (is_mp) {
>> -		if (rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n) < 0)
>> -			rte_panic("cannot put objects in mempool\n");
>> -	}
>> -	else {
>> -		if (rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n) < 0)
>> -			rte_panic("cannot put objects in mempool\n");
>> -	}
>> -#else
>> -	if (is_mp)
>> -		rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n);
>> -	else
>> -		rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n);
>> -#endif
>> +	/* Increment stats counter to tell us how many pool puts happened */
>> +	__MEMPOOL_STAT_ADD(mp, put_pool, n);
>> +
>> +	rte_mempool_ext_put_bulk(mp, obj_table, n);
>>   }
>>
>>
>> @@ -943,7 +1014,7 @@ rte_mempool_put(struct rte_mempool *mp, void *obj)
>>    */
>>   static inline int __attribute__((always_inline))
>>   __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
>> -		   unsigned n, int is_mc)
>> +		   unsigned n, __attribute__((unused))int is_mc)
>>   {
>>   	int ret;
>>   #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
>> @@ -954,8 +1025,8 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
>>   	uint32_t cache_size = mp->cache_size;
>>
>>   	/* cache is not enabled or single consumer */
>> -	if (unlikely(cache_size == 0 || is_mc == 0 ||
>> -		     n >= cache_size || lcore_id >= RTE_MAX_LCORE))
>> +	if (unlikely(cache_size == 0 || n >= cache_size ||
>> +						lcore_id >= RTE_MAX_LCORE))
>>   		goto ring_dequeue;
>>
>>   	cache = &mp->local_cache[lcore_id];
>> @@ -967,7 +1038,8 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
>>   		uint32_t req = n + (cache_size - cache->len);
>>
>>   		/* How many do we require i.e. number to fill the cache + the request */
>> -		ret = rte_ring_mc_dequeue_bulk(mp->ring, &cache->objs[cache->len], req);
>> +		ret = rte_mempool_ext_get_bulk(mp,
>> +						&cache->objs[cache->len], req);
>>   		if (unlikely(ret < 0)) {
>>   			/*
>>   			 * In the offchance that we are buffer constrained,
>> @@ -995,10 +1067,7 @@ ring_dequeue:
>>   #endif /* RTE_MEMPOOL_CACHE_MAX_SIZE > 0 */
>>
>>   	/* get remaining objects from ring */
>> -	if (is_mc)
>> -		ret = rte_ring_mc_dequeue_bulk(mp->ring, obj_table, n);
>> -	else
>> -		ret = rte_ring_sc_dequeue_bulk(mp->ring, obj_table, n);
>> +	ret = rte_mempool_ext_get_bulk(mp, obj_table, n);
>>
>>   	if (ret < 0)
>>   		__MEMPOOL_STAT_ADD(mp, get_fail, n);
>> @@ -1401,6 +1470,82 @@ ssize_t rte_mempool_xmem_usage(void *vaddr, uint32_t elt_num, size_t elt_sz,
>>   void rte_mempool_walk(void (*func)(const struct rte_mempool *, void *arg),
>>   		      void *arg);
>>
>> +/**
>> + * Function to get an index to an external mempool manager
>> + *
>> + * @param name
>> + *   The name of the mempool handler to search for in the list of handlers
>> + * @return
>> + *   The index of the mempool handler in the list of registered mempool
>> + *   handlers
>> + */
>> +int16_t
>> +rte_get_mempool_handler(const char *name);
>> +
>> +
>> +/**
>> + * Create a new mempool named *name* in memory.
>> + *
>> + * This function uses an externally defined alloc callback to allocate memory.
>> + * Its size is set to n elements.
>> + * All elements of the mempool are allocated separately to the mempool header.
>> + *
>> + * @param name
>> + *   The name of the mempool.
>> + * @param n
>> + *   The number of elements in the mempool. The optimum size (in terms of
>> + *   memory usage) for a mempool is when n is a power of two minus one:
>> + *   n = (2^q - 1).
>> + * @param cache_size
>> + *   If cache_size is non-zero, the rte_mempool library will try to
>> + *   limit the accesses to the common lockless pool, by maintaining a
>> + *   per-lcore object cache. This argument must be lower or equal to
>> + *   CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE and n / 1.5. It is advised to choose
>> + *   cache_size to have "n modulo cache_size == 0": if this is
>> + *   not the case, some elements will always stay in the pool and will
>> + *   never be used. The access to the per-lcore table is of course
>> + *   faster than the multi-producer/consumer pool. The cache can be
>> + *   disabled if the cache_size argument is set to 0; it can be useful to
>> + *   avoid losing objects in cache. Note that even if not used, the
>> + *   memory space for cache is always reserved in a mempool structure,
>> + *   except if CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE is set to 0.
>> + * @param private_data_size
>> + *   The size of the private data appended after the mempool
>> + *   structure. This is useful for storing some private data after the
>> + *   mempool structure, as is done for rte_mbuf_pool for example.
>> + * @param mp_init
>> + *   A function pointer that is called for initialization of the pool,
>> + *   before object initialization. The user can initialize the private
>> + *   data in this function if needed. This parameter can be NULL if
>> + *   not needed.
>> + * @param mp_init_arg
>> + *   An opaque pointer to data that can be used in the mempool
>> + *   constructor function.
>> + * @param obj_init
>> + *   A function pointer that is called for each object at
>> + *   initialization of the pool. The user can set some meta data in
>> + *   objects if needed. This parameter can be NULL if not needed.
>> + *   The obj_init() function takes the mempool pointer, the init_arg,
>> + *   the object pointer and the object number as parameters.
>> + * @param obj_init_arg
>> + *   An opaque pointer to data that can be used as an argument for
>> + *   each call to the object constructor function.
>> + * @param socket_id
>> + *   The *socket_id* argument is the socket identifier in the case of
>> + *   NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
>> + *   constraint for the reserved zone.
>> + * @param flags
>> + *   Flags controlling the behaviour of the mempool (MEMPOOL_F_*).
>> + * @param handler_name
>> + *   The name of the mempool handler to use for this mempool.
>> + * @return
>> + *   The pointer to the newly allocated mempool, on success. NULL on error.
>> + */
>> +struct rte_mempool *
>> +rte_mempool_create_ext(const char *name, unsigned n, unsigned elt_size,
>> +		unsigned cache_size, unsigned private_data_size,
>> +		rte_mempool_ctor_t *mp_init, void *mp_init_arg,
>> +		rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
>> +		int socket_id, unsigned flags,
>> +		const char *handler_name);
>> +
>>   #ifdef __cplusplus
>>   }
>>   #endif
>> diff --git a/lib/librte_mempool/rte_mempool_default.c b/lib/librte_mempool/rte_mempool_default.c
>> new file mode 100644
>> index 0000000..2493dc1
>> --- /dev/null
>> +++ b/lib/librte_mempool/rte_mempool_default.c
>> @@ -0,0 +1,229 @@
>> +/*-
>> + *   BSD LICENSE
>> + *
>> + *   Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
>> + *   All rights reserved.
>> + *
>> + *   Redistribution and use in source and binary forms, with or without
>> + *   modification, are permitted provided that the following conditions
>> + *   are met:
>> + *
>> + *     * Redistributions of source code must retain the above copyright
>> + *       notice, this list of conditions and the following disclaimer.
>> + *     * Redistributions in binary form must reproduce the above copyright
>> + *       notice, this list of conditions and the following disclaimer in
>> + *       the documentation and/or other materials provided with the
>> + *       distribution.
>> + *     * Neither the name of Intel Corporation nor the names of its
>> + *       contributors may be used to endorse or promote products derived
>> + *       from this software without specific prior written permission.
>> + *
>> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
>> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
>> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
>> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
>> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
>> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
>> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
>> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
>> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
>> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
>> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
>> + */
>> +
>> +#include <stdio.h>
>> +#include <rte_mempool.h>
>> +#include <rte_malloc.h>
>> +#include <string.h>
>> +
>> +#include "rte_mempool_internal.h"
>> +
>> +/*
>> + * Indirect jump table to support external memory pools
>> + */
>> +struct rte_mempool_handler_list mempool_handler_list = {
>> +	.sl =  RTE_SPINLOCK_INITIALIZER ,
>> +	.num_handlers = 0
>> +};
>> +
>> +/* TODO Convert to older mechanism of an array of structs */
>> +int16_t
>> +add_handler(struct rte_mempool_handler *h)
>> +{
>> +	int16_t handler_idx;
>> +
>> +	/* Serialise additions to the handler jump table */
>> +	rte_spinlock_lock(&mempool_handler_list.sl);
>> +
>> +	/* Check whether jump table has space */
>> +	if (mempool_handler_list.num_handlers >= RTE_MEMPOOL_MAX_HANDLER_IDX) {
>> +		rte_spinlock_unlock(&mempool_handler_list.sl);
>> +		RTE_LOG(ERR, MEMPOOL,
>> +				"Maximum number of mempool handlers exceeded\n");
>> +		return -1;
>> +	}
>> +
>> +	if ((h->put == NULL) || (h->get == NULL) ||
>> +		(h->get_count == NULL)) {
>> +		rte_spinlock_unlock(&mempool_handler_list.sl);
>> +		RTE_LOG(ERR, MEMPOOL,
>> +			"Missing callback while registering mempool handler\n");
>> +		return -1;
>> +	}
>> +
>> +	/* add new handler index */
>> +	handler_idx = mempool_handler_list.num_handlers++;
>> +
>> +	snprintf(mempool_handler_list.handler[handler_idx].name,
>> +				RTE_MEMPOOL_NAMESIZE, "%s", h->name);
>> +	mempool_handler_list.handler[handler_idx].alloc = h->alloc;
>> +	mempool_handler_list.handler[handler_idx].put = h->put;
>> +	mempool_handler_list.handler[handler_idx].get = h->get;
>> +	mempool_handler_list.handler[handler_idx].get_count = h->get_count;
>> +
>> +	rte_spinlock_unlock(&mempool_handler_list.sl);
>> +
>> +	return handler_idx;
>> +}
>> +
>> +/* TODO Convert to older mechanism of an array of structs */
>> +int16_t
>> +rte_get_mempool_handler(const char *name)
>> +{
>> +	int16_t i;
>> +
>> +	for (i = 0; i < mempool_handler_list.num_handlers; i++) {
>> +		if (!strcmp(name, mempool_handler_list.handler[i].name))
>> +			return i;
>> +	}
>> +	return -1;
>> +}
>> +
>> +static int
>> +common_ring_mp_put(void *p, void * const *obj_table, unsigned n)
>> +{
>> +	return rte_ring_mp_enqueue_bulk((struct rte_ring *)p, obj_table, n);
>> +}
>> +
>> +static int
>> +common_ring_sp_put(void *p, void * const *obj_table, unsigned n)
>> +{
>> +	return rte_ring_sp_enqueue_bulk((struct rte_ring *)p, obj_table, n);
>> +}
>> +
>> +static int
>> +common_ring_mc_get(void *p, void **obj_table, unsigned n)
>> +{
>> +	return rte_ring_mc_dequeue_bulk((struct rte_ring *)p, obj_table, n);
>> +}
>> +
>> +static int
>> +common_ring_sc_get(void *p, void **obj_table, unsigned n)
>> +{
>> +	return rte_ring_sc_dequeue_bulk((struct rte_ring *)p, obj_table, n);
>> +}
>> +
>> +static unsigned
>> +common_ring_get_count(void *p)
>> +{
>> +	return rte_ring_count((struct rte_ring *)p);
>> +}
>> +
>> +
>> +static void *
>> +rte_mempool_common_ring_alloc(struct rte_mempool *mp,
>> +		const char *name, unsigned n, int socket_id, unsigned flags)
>> +{
>> +	struct rte_ring *r;
>> +	char rg_name[RTE_RING_NAMESIZE];
>> +	int rg_flags = 0;
>> +
>> +	if (flags & MEMPOOL_F_SP_PUT)
>> +		rg_flags |= RING_F_SP_ENQ;
>> +	if (flags & MEMPOOL_F_SC_GET)
>> +		rg_flags |= RING_F_SC_DEQ;
>> +
>> +	/* allocate the ring that will be used to store objects */
>> +	/* Ring functions will return appropriate errors if we are
>> +	 * running as a secondary process etc., so no checks made
>> +	 * in this function for that condition */
>> +	snprintf(rg_name, sizeof(rg_name), "%s-ring", name);
>> +	r = rte_ring_create(rg_name, rte_align32pow2(n+1), socket_id, rg_flags);
>> +	if (r == NULL)
>> +		return NULL;
>> +
>> +	mp->rt_pool = (void *)r;
>> +
>> +	return (void *) r;
>> +}
>> +
>> +void *
>> +rte_mempool_ext_alloc(struct rte_mempool *mp,
>> +		const char *name, unsigned n, int socket_id, unsigned flags)
>> +{
>> +	if (mempool_handler_list.handler[mp->handler_idx].alloc) {
>> +		return (mempool_handler_list.handler[mp->handler_idx].alloc)
>> +						(mp, name, n, socket_id, flags);
>> +	}
>> +	return NULL;
>> +}
>> +
>> +inline int __attribute__((always_inline))
>> +rte_mempool_ext_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
>> +{
>> +	return (mempool_handler_list.handler[mp->handler_idx].get)
>> +						(mp->rt_pool, obj_table, n);
>> +}
>> +
>> +inline int __attribute__((always_inline))
>> +rte_mempool_ext_put_bulk(struct rte_mempool *mp, void * const *obj_table,
>> +		unsigned n)
>> +{
>> +	return (mempool_handler_list.handler[mp->handler_idx].put)
>> +						(mp->rt_pool, obj_table, n);
>> +}
>> +
>> +int
>> +rte_mempool_ext_get_count(const struct rte_mempool *mp)
>> +{
>> +	return (mempool_handler_list.handler[mp->handler_idx].get_count)
>> +						(mp->rt_pool);
>> +}
>> +
>> +static struct rte_mempool_handler handler_mp_mc = {
>> +	.name = "ring_mp_mc",
>> +	.alloc = rte_mempool_common_ring_alloc,
>> +	.put = common_ring_mp_put,
>> +	.get = common_ring_mc_get,
>> +	.get_count = common_ring_get_count,
>> +	.free = NULL
>> +};
>> +static struct rte_mempool_handler handler_sp_sc = {
>> +	.name = "ring_sp_sc",
>> +	.alloc = rte_mempool_common_ring_alloc,
>> +	.put = common_ring_sp_put,
>> +	.get = common_ring_sc_get,
>> +	.get_count = common_ring_get_count,
>> +	.free = NULL
>> +};
>> +static struct rte_mempool_handler handler_mp_sc = {
>> +	.name = "ring_mp_sc",
>> +	.alloc = rte_mempool_common_ring_alloc,
>> +	.put = common_ring_mp_put,
>> +	.get = common_ring_sc_get,
>> +	.get_count = common_ring_get_count,
>> +	.free = NULL
>> +};
>> +static struct rte_mempool_handler handler_sp_mc = {
>> +	.name = "ring_sp_mc",
>> +	.alloc = rte_mempool_common_ring_alloc,
>> +	.put = common_ring_sp_put,
>> +	.get = common_ring_mc_get,
>> +	.get_count = common_ring_get_count,
>> +	.free = NULL
>> +};
>> +
>> +REGISTER_MEMPOOL_HANDLER(handler_mp_mc);
>> +REGISTER_MEMPOOL_HANDLER(handler_sp_sc);
>> +REGISTER_MEMPOOL_HANDLER(handler_mp_sc);
>> +REGISTER_MEMPOOL_HANDLER(handler_sp_mc);
>> diff --git a/lib/librte_mempool/rte_mempool_internal.h b/lib/librte_mempool/rte_mempool_internal.h
>> new file mode 100644
>> index 0000000..92b7bde
>> --- /dev/null
>> +++ b/lib/librte_mempool/rte_mempool_internal.h
>> @@ -0,0 +1,74 @@
>> +/*-
>> + *   BSD LICENSE
>> + *
>> + *   Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
>> + *   All rights reserved.
>> + *
>> + *   Redistribution and use in source and binary forms, with or without
>> + *   modification, are permitted provided that the following conditions
>> + *   are met:
>> + *
>> + *     * Redistributions of source code must retain the above copyright
>> + *       notice, this list of conditions and the following disclaimer.
>> + *     * Redistributions in binary form must reproduce the above copyright
>> + *       notice, this list of conditions and the following disclaimer in
>> + *       the documentation and/or other materials provided with the
>> + *       distribution.
>> + *     * Neither the name of Intel Corporation nor the names of its
>> + *       contributors may be used to endorse or promote products derived
>> + *       from this software without specific prior written permission.
>> + *
>> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
>> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
>> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
>> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
>> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
>> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
>> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
>> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
>> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
>> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
>> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
>> + */
>> +
>> +#ifndef _RTE_MEMPOOL_INTERNAL_H_
>> +#define _RTE_MEMPOOL_INTERNAL_H_
>> +
>> +#include <rte_spinlock.h>
>> +#include <rte_mempool.h>
>> +
>> +#define RTE_MEMPOOL_MAX_HANDLER_IDX 16
>> +
>> +struct rte_mempool_handler {
>> +	char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool handler */
>> +
>> +	rte_mempool_alloc_t alloc;
>> +
>> +	rte_mempool_put_t put __rte_cache_aligned;
>> +
>> +	rte_mempool_get_t get __rte_cache_aligned;
>> +
>> +	rte_mempool_get_count get_count __rte_cache_aligned;
>> +
>> +	rte_mempool_free_t free __rte_cache_aligned;
>> +};
>
> IMO, the structure should be cache aligned, not the individual
> elements, as the elements are likely read-only in the fast path.


OK, will try out some variations and take a look at the performance.


>> +
>> +struct rte_mempool_handler_list {
>> +	rte_spinlock_t sl;		  /**< Spinlock for add/delete. */
>> +
>> +	int32_t num_handlers;	  /**< Number of handlers that are valid. */
>> +
>> +	/* storage for all possible handlers */
>> +	struct rte_mempool_handler handler[RTE_MEMPOOL_MAX_HANDLER_IDX];
>> +};
>> +
>> +int16_t add_handler(struct rte_mempool_handler *h);
>> +
>> +#define REGISTER_MEMPOOL_HANDLER(h) \
>> +static int16_t __attribute__((used)) testfn_##h(void);\
>> +int16_t __attribute__((constructor, used)) testfn_##h(void)\
>> +{\
>> +	return add_handler(&h);\
>> +}
>> +
>> +#endif
>> --
>> 1.9.3
>>

Thanks, comments appreciated.
Regards,
David.

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH 1/5] mempool: add external mempool manager support
  2016-02-03 14:16     ` Hunt, David
@ 2016-02-04 13:23       ` Jerin Jacob
  0 siblings, 0 replies; 238+ messages in thread
From: Jerin Jacob @ 2016-02-04 13:23 UTC (permalink / raw)
  To: Hunt, David; +Cc: dev

On Wed, Feb 03, 2016 at 02:16:06PM +0000, Hunt, David wrote:
> On 28/01/2016 17:52, Jerin Jacob wrote:
> >On Tue, Jan 26, 2016 at 05:25:51PM +0000, David Hunt wrote:
> >>Adds the new rte_mempool_create_ext api and callback mechanism for
> >>external mempool handlers
> >>
> >>Modifies the existing rte_mempool_create to set up the handler_idx to
> >>the relevant mempool handler based on the handler name:
> >>	ring_sp_sc
> >>	ring_mp_mc
> >>	ring_sp_mc
> >>	ring_mp_sc
> >>
> >>Signed-off-by: David Hunt <david.hunt@intel.com>
> >>---

[snip]

> >>+	if (flags & MEMPOOL_F_USE_STACK)
> >>+		mp->handler_idx = rte_get_mempool_handler("stack");
> >>+	else if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
> >>+		mp->handler_idx = rte_get_mempool_handler("ring_sp_sc");
> >>+	else if (flags & MEMPOOL_F_SP_PUT)
> >>+		mp->handler_idx = rte_get_mempool_handler("ring_sp_mc");
> >>+	else if (flags & MEMPOOL_F_SC_GET)
> >>+		mp->handler_idx = rte_get_mempool_handler("ring_mp_sc");
> >>+	else
> >>+		mp->handler_idx = rte_get_mempool_handler("ring_mp_mc");
> >>+
> >
> >Why still use flag based selection? Why not "name" based? See below
> >for more description
> 
> 
> The old API does not have a 'name' parameter, so it needs to work out
> which handler to use based on the flags. This is not necessary in the
> new API call, which uses the name-based index instead.
> 
I agree. But the old API to new API mapping can still be done like
below:
rte_mempool_create -> rte_mempool_create_ext(..,"ring_mp_mc")

[snip]

> 
> >>+	/* init the mempool structure */
> >>+	mp = mz->addr;
> >>+	memset(mp, 0, sizeof(*mp));
> >>+	snprintf(mp->name, sizeof(mp->name), "%s", name);
> >>+	mp->phys_addr = mz->phys_addr;
> >>+	mp->size = n;
> >>+	mp->flags = flags;
> >>+	mp->cache_size = cache_size;
> >>+	mp->cache_flushthresh = CALC_CACHE_FLUSHTHRESH(cache_size);
> >>+	mp->private_data_size = private_data_size;
> >>+	mp->handler_idx = handler_idx;
> >>+	mp->elt_size = elt_size;
> >>+	mp->rt_pool = rte_mempool_ext_alloc(mp, name, n, socket_id, flags);
> >
> >
> >IMO, We can avoid the duplicaition of above code with rte_mempool_create.
> >i.e  rte_mempool_create -> rte_mempool_create_ext(..,"ring_mp_mc")
> 
> 
> rte_mempool_create is not really a subset of rte_mempool_create_ext, so
> doing this would not be possible. I did have a look at this before pushing
> the patch, but the code was so different in each case I decided to leave
> them as is. Maybe break out the section that sets up the mempool structure
> into a separate function?

Yes, there is a lot of common code between
rte_mempool_create/rte_mempool_xmem_create and rte_mempool_create_ext.

IMO, we need to converge these functions. Otherwise, a new feature in
mempool would require changes in both places, which makes it difficult
to maintain.
In my view, both the external and internal pool managers should share
the same creation code; only the backend put/get/alloc function
pointers should differ, in order to maintain functional consistency.

Thanks, comments appreciated.
Jerin

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH 1/5] mempool: add external mempool manager support
  2016-01-26 17:25 ` [PATCH 1/5] mempool: add external mempool manager support David Hunt
  2016-01-28 17:52   ` Jerin Jacob
@ 2016-02-04 14:52   ` Olivier MATZ
  2016-02-04 16:47     ` Hunt, David
                       ` (2 more replies)
  1 sibling, 3 replies; 238+ messages in thread
From: Olivier MATZ @ 2016-02-04 14:52 UTC (permalink / raw)
  To: David Hunt, dev

Hi David,

Nice work, thanks !
Please see some comments below.


On 01/26/2016 06:25 PM, David Hunt wrote:

> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> index aff5f6d..8c01838 100644
> --- a/lib/librte_mempool/rte_mempool.c
> +++ b/lib/librte_mempool/rte_mempool.c
> @@ -375,48 +376,28 @@ rte_mempool_xmem_usage(void *vaddr, uint32_t elt_num, size_t elt_sz,
>   	return usz;
>   }
>
> -#ifndef RTE_LIBRTE_XEN_DOM0
> -/* stub if DOM0 support not configured */
> -struct rte_mempool *
> -rte_dom0_mempool_create(const char *name __rte_unused,
> -			unsigned n __rte_unused,
> -			unsigned elt_size __rte_unused,
> -			unsigned cache_size __rte_unused,
> -			unsigned private_data_size __rte_unused,
> -			rte_mempool_ctor_t *mp_init __rte_unused,
> -			void *mp_init_arg __rte_unused,
> -			rte_mempool_obj_ctor_t *obj_init __rte_unused,
> -			void *obj_init_arg __rte_unused,
> -			int socket_id __rte_unused,
> -			unsigned flags __rte_unused)
> -{
> -	rte_errno = EINVAL;
> -	return NULL;
> -}
> -#endif
> -

Could we move this to a separate commit?
"mempool: remove unused rte_dom0_mempool_create stub"


>   /* create the mempool */
>   struct rte_mempool *
>   rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
> -		   unsigned cache_size, unsigned private_data_size,
> -		   rte_mempool_ctor_t *mp_init, void *mp_init_arg,
> -		   rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
> -		   int socket_id, unsigned flags)
> +			unsigned cache_size, unsigned private_data_size,
> +			rte_mempool_ctor_t *mp_init, void *mp_init_arg,
> +			rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
> +			int socket_id, unsigned flags)
>   {
>   	if (rte_xen_dom0_supported())
>   		return rte_dom0_mempool_create(name, n, elt_size,
> -					       cache_size, private_data_size,
> -					       mp_init, mp_init_arg,
> -					       obj_init, obj_init_arg,
> -					       socket_id, flags);
> +			cache_size, private_data_size,
> +			mp_init, mp_init_arg,
> +			obj_init, obj_init_arg,
> +			socket_id, flags);
>   	else
>   		return rte_mempool_xmem_create(name, n, elt_size,
> -					       cache_size, private_data_size,
> -					       mp_init, mp_init_arg,
> -					       obj_init, obj_init_arg,
> -					       socket_id, flags,
> -					       NULL, NULL, MEMPOOL_PG_NUM_DEFAULT,
> -					       MEMPOOL_PG_SHIFT_MAX);
> +			cache_size, private_data_size,
> +			mp_init, mp_init_arg,
> +			obj_init, obj_init_arg,
> +			socket_id, flags,
> +			NULL, NULL,
> +			MEMPOOL_PG_NUM_DEFAULT, MEMPOOL_PG_SHIFT_MAX);
>   }

As far as I can see, you are not modifying the code here, only the
style. For better readability, it should go in another commit that
only fixes indent or style issues.

Also, I think the proper indentation is to use only one tab for the
subsequent lines.
The coding style guide (doc/guides/contributing/coding_style.rst) is
not very clear about this however.

> @@ -469,7 +448,7 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
>
>   	/* asked cache too big */
>   	if (cache_size > RTE_MEMPOOL_CACHE_MAX_SIZE ||
> -	    CALC_CACHE_FLUSHTHRESH(cache_size) > n) {
> +		CALC_CACHE_FLUSHTHRESH(cache_size) > n) {
>   		rte_errno = EINVAL;
>   		return NULL;
>   	}

same here.


> @@ -598,6 +568,22 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
>   	mp->cache_flushthresh = CALC_CACHE_FLUSHTHRESH(cache_size);
>   	mp->private_data_size = private_data_size;
>
> +	/*
> +	 * Since we have 4 combinations of the SP/SC/MP/MC, and stack,
> +	 * examine the
> +	 * flags to set the correct index into the handler table.
> +	 */

nit: comment style is not correct


> +	if (flags & MEMPOOL_F_USE_STACK)
> +		mp->handler_idx = rte_get_mempool_handler("stack");

The stack handler does not exist yet, it is introduced in the next
commit. I think this code should be moved in the next commit too.


> @@ -622,6 +607,10 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
>
>   	mp->elt_va_end = mp->elt_va_start;
>
> +	/* Parameters are setup. Call the mempool handler alloc */
> +	if ((rte_mempool_ext_alloc(mp, name, n, socket_id, flags)) == NULL)
> +		goto exit;
> +

I think some memory needs to be freed here. At least 'te'.
The memzone 'mz' is never freed in this function, even before your
patch, but since Sergio's patch (commit ff909fe21f), we could fix
that issue too.
I can submit a patch for it, or if you prefer, you can fix it in
a separate patch of your series, just let me know.

> @@ -681,7 +670,9 @@ rte_mempool_dump_cache(FILE *f, const struct rte_mempool *mp)
>   	fprintf(f, "    cache_size=%"PRIu32"\n", mp->cache_size);
>   	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
>   		cache_count = mp->local_cache[lcore_id].len;
> -		fprintf(f, "    cache_count[%u]=%u\n", lcore_id, cache_count);
> +		if (cache_count > 0)
> +			fprintf(f, "    cache_count[%u]=%u\n",
> +						lcore_id, cache_count);
>   		count += cache_count;
>   	}
>   	fprintf(f, "    total_cache_count=%u\n", count);

This could also be moved in a separate commit.


> @@ -802,14 +793,13 @@ rte_mempool_dump(FILE *f, const struct rte_mempool *mp)
>   	fprintf(f, "  elt_size=%"PRIu32"\n", mp->elt_size);
>   	fprintf(f, "  trailer_size=%"PRIu32"\n", mp->trailer_size);
>   	fprintf(f, "  total_obj_size=%"PRIu32"\n",
> -	       mp->header_size + mp->elt_size + mp->trailer_size);
> +		   mp->header_size + mp->elt_size + mp->trailer_size);
>
>   	fprintf(f, "  private_data_size=%"PRIu32"\n", mp->private_data_size);
>   	fprintf(f, "  pg_num=%"PRIu32"\n", mp->pg_num);

to be moved in the "style" commit.

> @@ -825,7 +815,7 @@ rte_mempool_dump(FILE *f, const struct rte_mempool *mp)
>   			mp->size);
>
>   	cache_count = rte_mempool_dump_cache(f, mp);
> -	common_count = rte_ring_count(mp->ring);
> +	common_count = /* rte_ring_count(mp->ring)*/0;
>   	if ((cache_count + common_count) > mp->size)
>   		common_count = mp->size - cache_count;
>   	fprintf(f, "  common_pool_count=%u\n", common_count);

should it be rte_mempool_ext_get_count(mp) instead?

> @@ -904,7 +894,7 @@ rte_mempool_lookup(const char *name)
>   }
>
>   void rte_mempool_walk(void (*func)(const struct rte_mempool *, void *),
> -		      void *arg)
> +			  void *arg)
>   {
>   	struct rte_tailq_entry *te = NULL;
>   	struct rte_mempool_list *mempool_list;

to be moved in the "style" commit.


> @@ -919,3 +909,111 @@ void rte_mempool_walk(void (*func)(const struct rte_mempool *, void *),
>
>   	rte_rwlock_read_unlock(RTE_EAL_MEMPOOL_RWLOCK);
>   }
> +
> +
> +/* create the mempool using an external mempool manager */
> +struct rte_mempool *
> +rte_mempool_create_ext(const char *name, unsigned n, unsigned elt_size,
> +			unsigned cache_size, unsigned private_data_size,
> +			rte_mempool_ctor_t *mp_init, void *mp_init_arg,
> +			rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
> +			int socket_id, unsigned flags,
> +			const char *handler_name)
> +{

I would have used one tab here for subsequent lines.


> +	char mz_name[RTE_MEMZONE_NAMESIZE];
> +	struct rte_mempool_list *mempool_list;
> +	struct rte_mempool *mp = NULL;
> +	struct rte_tailq_entry *te;
> +	const struct rte_memzone *mz;
> +	size_t mempool_size;
> +	int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
> +	int rg_flags = 0;
> +	int16_t handler_idx;
> +
> +	mempool_list = RTE_TAILQ_CAST(rte_mempool_tailq.head, rte_mempool_list);
> +
> +	/* asked cache too big */
> +	if (cache_size > RTE_MEMPOOL_CACHE_MAX_SIZE ||
> +		CALC_CACHE_FLUSHTHRESH(cache_size) > n) {
> +		rte_errno = EINVAL;
> +		return NULL;
> +	}
> +
> +	handler_idx = rte_get_mempool_handler(handler_name);
> +	if (handler_idx < 0) {
> +		RTE_LOG(ERR, MEMPOOL, "Cannot find mempool handler by name!\n");
> +		goto exit;
> +	}
> +
> +	/* ring flags */
> +	if (flags & MEMPOOL_F_SP_PUT)
> +		rg_flags |= RING_F_SP_ENQ;
> +	if (flags & MEMPOOL_F_SC_GET)
> +		rg_flags |= RING_F_SC_DEQ;
> +
> ...

I have the same comment as Jerin here. I think it should be
factorized with rte_mempool_xmem_create() if possible. Maybe at
least a function rte_mempool_init() could be introduced, following
the same model as rte_ring_init().


> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> index 6e2390a..620cfb7 100644
> --- a/lib/librte_mempool/rte_mempool.h
> +++ b/lib/librte_mempool/rte_mempool.h
> @@ -88,6 +88,8 @@ extern "C" {
>   struct rte_mempool_debug_stats {
>   	uint64_t put_bulk;         /**< Number of puts. */
>   	uint64_t put_objs;         /**< Number of objects successfully put. */
> +	uint64_t put_pool_bulk;    /**< Number of puts into pool. */
> +	uint64_t put_pool_objs;    /**< Number of objects into pool. */
>   	uint64_t get_success_bulk; /**< Successful allocation number. */
>   	uint64_t get_success_objs; /**< Objects successfully allocated. */
>   	uint64_t get_fail_bulk;    /**< Failed allocation number. */

I think the comment of put_pool_objs is not very clear.
Shouldn't we have the same stats for get?


> @@ -123,6 +125,7 @@ struct rte_mempool_objsz {
>   #define RTE_MEMPOOL_NAMESIZE 32 /**< Maximum length of a memory pool. */
>   #define RTE_MEMPOOL_MZ_PREFIX "MP_"
>
> +
>   /* "MP_<name>" */
>   #define	RTE_MEMPOOL_MZ_FORMAT	RTE_MEMPOOL_MZ_PREFIX "%s"
>

to be removed

> @@ -175,12 +178,85 @@ struct rte_mempool_objtlr {
>   #endif
>   };
>
> +/* Handler functions for external mempool support */
> +typedef void *(*rte_mempool_alloc_t)(struct rte_mempool *mp,
> +		const char *name, unsigned n, int socket_id, unsigned flags);
> +typedef int (*rte_mempool_put_t)(void *p,
> +		void * const *obj_table, unsigned n);
> +typedef int (*rte_mempool_get_t)(void *p, void **obj_table,
> +		unsigned n);
> +typedef unsigned (*rte_mempool_get_count)(void *p);
> +typedef int(*rte_mempool_free_t)(struct rte_mempool *mp);

a space is missing after 'int'.


> +
> +/**
> + * @internal wrapper for external mempool manager alloc callback.
> + *
> + * @param mp
> + *   Pointer to the memory pool.
> + * @param name
> + *   Name of the statistics field to increment in the memory pool.
> + * @param n
> + *   Number to add to the object-oriented statistics.

Are these comments correct?


> + * @param socket_id
> + *   socket id on which to allocate.
> + * @param flags
> + *   general flags to allocate function

We could add that we are talking about MEMPOOL_F_* flags.

By the way, the '@return' is missing in all declarations.


> +/**
> + * @internal wrapper for external mempool manager get_count callback.
> + *
> + * @param mp
> + *   Pointer to the memory pool.
> + */
> +int
> +rte_mempool_ext_get_count(const struct rte_mempool *mp);

should it be unsigned instead of int?


> +
> +/**
> + * @internal wrapper for external mempool manager free callback.
> + *
> + * @param mp
> + *   Pointer to the memory pool.
> + */
> +int
> +rte_mempool_ext_free(struct rte_mempool *mp);
> +
>   /**
>    * The RTE mempool structure.
>    */
>   struct rte_mempool {
>   	char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool. */
> -	struct rte_ring *ring;           /**< Ring to store objects. */
>   	phys_addr_t phys_addr;           /**< Phys. addr. of mempool struct. */
>   	int flags;                       /**< Flags of the mempool. */
>   	uint32_t size;                   /**< Size of the mempool. */
> @@ -194,6 +270,11 @@ struct rte_mempool {
>
>   	unsigned private_data_size;      /**< Size of private data. */
>
> +	/* Common pool data structure pointer */
> +	void *rt_pool __rte_cache_aligned;

What is the meaning of rt_pool?


> +
> +	int16_t handler_idx;
> +

I don't think I'm getting why an index is better than a pointer to
the struct rte_mempool_handler. It would simplify the add_handler()
function. See below for a detailed explaination.


> @@ -223,6 +304,10 @@ struct rte_mempool {
>   #define MEMPOOL_F_NO_CACHE_ALIGN 0x0002 /**< Do not align objs on cache lines.*/
>   #define MEMPOOL_F_SP_PUT         0x0004 /**< Default put is "single-producer".*/
>   #define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
> +#define MEMPOOL_F_USE_STACK      0x0010 /**< Use a stack for the common pool. */

Stack is not implemented in this commit. It should be moved in next
commit.


> +#define MEMPOOL_F_USE_TM         0x0020
> +#define MEMPOOL_F_NO_SECONDARY   0x0040
> +

What are these flags?


> @@ -728,7 +813,6 @@ rte_dom0_mempool_create(const char *name, unsigned n, unsigned elt_size,
>   		rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
>   		int socket_id, unsigned flags);
>
> -
>   /**
>    * Dump the status of the mempool to the console.
>    *

style


> @@ -753,7 +837,7 @@ void rte_mempool_dump(FILE *f, const struct rte_mempool *mp);
>    */
>   static inline void __attribute__((always_inline))
>   __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
> -		    unsigned n, int is_mp)
> +		    unsigned n, __attribute__((unused)) int is_mp)

You could use __rte_unused instead of __attribute__((unused))


> @@ -769,8 +853,7 @@ __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
>
>   #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
>   	/* cache is not enabled or single producer or non-EAL thread */
> -	if (unlikely(cache_size == 0 || is_mp == 0 ||
> -		     lcore_id >= RTE_MAX_LCORE))
> +	if (unlikely(cache_size == 0 || lcore_id >= RTE_MAX_LCORE))
>   		goto ring_enqueue;
>
>   	/* Go straight to ring if put would overflow mem allocated for cache */

If I understand well, we now always use the cache, even if the mempool
is single-producer. I was wondering if it would have a performance
impact... I suppose that using the cache is more efficient than the ring
in single-producer mode, so it may increase performance. Do you have an
idea of the impact here?

I think we could remove the parameter as the function is marked as
internal. The comment above should also be fixed. The same comments
apply to the get() functions.


> @@ -793,8 +876,8 @@ __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
>
>   	cache->len += n;
>
> -	if (cache->len >= flushthresh) {
> -		rte_ring_mp_enqueue_bulk(mp->ring, &cache->objs[cache_size],
> +	if (unlikely(cache->len >= flushthresh)) {
> +		rte_mempool_ext_put_bulk(mp, &cache->objs[cache_size],
>   				cache->len - cache_size);

Shouldn't we add a __MEMPOOL_STAT_ADD(mp, put_pool,
   cache->len - cache_size) here ?

> @@ -954,8 +1025,8 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
>   	uint32_t cache_size = mp->cache_size;
>
>   	/* cache is not enabled or single consumer */
> -	if (unlikely(cache_size == 0 || is_mc == 0 ||
> -		     n >= cache_size || lcore_id >= RTE_MAX_LCORE))
> +	if (unlikely(cache_size == 0 || n >= cache_size ||
> +						lcore_id >= RTE_MAX_LCORE))

incorrect indent


> @@ -967,7 +1038,8 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
>   		uint32_t req = n + (cache_size - cache->len);
>
>   		/* How many do we require i.e. number to fill the cache + the request */
> -		ret = rte_ring_mc_dequeue_bulk(mp->ring, &cache->objs[cache->len], req);
> +		ret = rte_mempool_ext_get_bulk(mp,
> +						&cache->objs[cache->len], req);

indent


> +/**
> + * Function to get an index to an external mempool manager
> + *
> + * @param name
> + *   The name of the mempool handler to search for in the list of handlers
> + * @return
> + *   The index of the mempool handler in the list of registered mempool
> + *   handlers
> + */
> +int16_t
> +rte_get_mempool_handler(const char *name);

I would prefer a function like this:

const struct rte_mempool_handler *
rte_get_mempool_handler(const char *name);

(detailed explanation below)


> diff --git a/lib/librte_mempool/rte_mempool_default.c b/lib/librte_mempool/rte_mempool_default.c
> new file mode 100644
> index 0000000..2493dc1
> --- /dev/null
> +++ b/lib/librte_mempool/rte_mempool_default.c
> +#include "rte_mempool_internal.h"
> +
> +/*
> + * Indirect jump table to support external memory pools
> + */
> +struct rte_mempool_handler_list mempool_handler_list = {
> +	.sl =  RTE_SPINLOCK_INITIALIZER ,
> +	.num_handlers = 0
> +};
> +
> +/* TODO Convert to older mechanism of an array of structs */
> +int16_t
> +add_handler(struct rte_mempool_handler *h)
> +{
> +	int16_t handler_idx;
> +
> +	/*  */
> +	rte_spinlock_lock(&mempool_handler_list.sl);
> +
> +	/* Check whether jump table has space */
> +	if (mempool_handler_list.num_handlers >= RTE_MEMPOOL_MAX_HANDLER_IDX) {
> +		rte_spinlock_unlock(&mempool_handler_list.sl);
> +		RTE_LOG(ERR, MEMPOOL,
> +				"Maximum number of mempool handlers exceeded\n");
> +		return -1;
> +	}
> +
> +	if ((h->put == NULL) || (h->get == NULL) ||
> +		(h->get_count == NULL)) {
> +		rte_spinlock_unlock(&mempool_handler_list.sl);
> +		 RTE_LOG(ERR, MEMPOOL,
> +					"Missing callback while registering mempool handler\n");
> +		return -1;
> +	}
> +
> +	/* add new handler index */
> +	handler_idx = mempool_handler_list.num_handlers++;
> +
> +	snprintf(mempool_handler_list.handler[handler_idx].name,
> +				RTE_MEMPOOL_NAMESIZE, "%s", h->name);
> +	mempool_handler_list.handler[handler_idx].alloc = h->alloc;
> +	mempool_handler_list.handler[handler_idx].put = h->put;
> +	mempool_handler_list.handler[handler_idx].get = h->get;
> +	mempool_handler_list.handler[handler_idx].get_count = h->get_count;
> +
> +	rte_spinlock_unlock(&mempool_handler_list.sl);
> +
> +	return handler_idx;
> +}

Why not use a mechanism similar to what we have for PMDs?

	void rte_eal_driver_register(struct rte_driver *driver)
	{
		TAILQ_INSERT_TAIL(&dev_driver_list, driver, next);
	}

To do that, you just need to add a TAILQ_ENTRY() in your
rte_mempool_handler structure. This would avoid duplicating the
structure into a static array of limited size.

Accessing the callbacks would be easier:

	return mp->mp_handler->put(mp->rt_pool, obj_table, n);

instead of:

	return (mempool_handler_list.handler[mp->handler_idx].put)
					(mp->rt_pool, obj_table, n);

If we really want to copy the handlers somewhere, it could be in
the mempool structure. It would avoid an extra dereference
(note the first '.' instead of '->'):

	return mp.mp_handler->put(mp->rt_pool, obj_table, n);

After doing that, we could ask ourselves whether the wrappers are
still useful. I would say they could be removed.


The spinlock could be kept, although it may look a bit overkill:
- I don't expect several registrations to happen at the same time
- There is no unregister() function, so there is no risk in
   browsing the list without extra locking

Last thing, I think this code should go in rte_mempool.c, not in
rte_mempool_default.c.


> +
> +/* TODO Convert to older mechanism of an array of structs */
> +int16_t
> +rte_get_mempool_handler(const char *name)
> +{
> +	int16_t i;
> +
> +	for (i = 0; i < mempool_handler_list.num_handlers; i++) {
> +		if (!strcmp(name, mempool_handler_list.handler[i].name))
> +			return i;
> +	}
> +	return -1;
> +}

This would be replaced by a TAILQ_FOREACH().


> +static void *
> +rte_mempool_common_ring_alloc(struct rte_mempool *mp,
> +		const char *name, unsigned n, int socket_id, unsigned flags)
> +{
> +	struct rte_ring *r;
> +	char rg_name[RTE_RING_NAMESIZE];
> +	int rg_flags = 0;
> +
> +	if (flags & MEMPOOL_F_SP_PUT)
> +		rg_flags |= RING_F_SP_ENQ;
> +	if (flags & MEMPOOL_F_SC_GET)
> +		rg_flags |= RING_F_SC_DEQ;
> +
> +	/* allocate the ring that will be used to store objects */
> +	/* Ring functions will return appropriate errors if we are
> +	 * running as a secondary process etc., so no checks made
> +	 * in this function for that condition */
> +	snprintf(rg_name, sizeof(rg_name), "%s-ring", name);
> +	r = rte_ring_create(rg_name, rte_align32pow2(n+1), socket_id, rg_flags);
> +	if (r == NULL)
> +		return NULL;
> +
> +	mp->rt_pool = (void *)r;
> +
> +	return (void *) r;

I don't think the explicit casts are required.

> --- /dev/null
> +++ b/lib/librte_mempool/rte_mempool_internal.h

Is it the proper name?
We could imagine a mempool handler provided by a plugin, and
in this case this code should go in rte_mempool.h.

> +
> +struct rte_mempool_handler {
> +	char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool handler */

I would use a const char * here instead.


> +
> +	rte_mempool_alloc_t alloc;
> +
> +	rte_mempool_put_t put __rte_cache_aligned;
> +
> +	rte_mempool_get_t get __rte_cache_aligned;
> +
> +	rte_mempool_get_count get_count __rte_cache_aligned;
> +
> +	rte_mempool_free_t free __rte_cache_aligned;
> +};

I agree with Jerin's comments. I don't think we should cache
align each field. Maybe the whole structure.

> +
> +struct rte_mempool_handler_list {
> +	rte_spinlock_t sl;		  /**< Spinlock for add/delete. */
> +
> +	int32_t num_handlers;	  /**< Number of handlers that are valid. */
> +
> +	/* storage for all possible handlers */
> +	struct rte_mempool_handler handler[RTE_MEMPOOL_MAX_HANDLER_IDX];
> +};
> +
> +int16_t add_handler(struct rte_mempool_handler *h);

I think it should be called rte_mempool_register_handler().

> +
> +#define REGISTER_MEMPOOL_HANDLER(h) \
> +static int16_t __attribute__((used)) testfn_##h(void);\
> +int16_t __attribute__((constructor, used)) testfn_##h(void)\
> +{\
> +	return add_handler(&h);\
> +}
> +
> +#endif
>



Regards,
Olivier

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH 2/5] memool: add stack (lifo) based external mempool handler
  2016-01-26 17:25 ` [PATCH 2/5] memool: add stack (lifo) based external mempool handler David Hunt
@ 2016-02-04 15:02   ` Olivier MATZ
  0 siblings, 0 replies; 238+ messages in thread
From: Olivier MATZ @ 2016-02-04 15:02 UTC (permalink / raw)
  To: David Hunt, dev

Hi,

> [PATCH 2/5] memool: add stack (lifo) based external mempool handler

typo in the patch title: memool -> mempool


On 01/26/2016 06:25 PM, David Hunt wrote:
> adds a simple stack based mempool handler
>
> Signed-off-by: David Hunt <david.hunt@intel.com>

What is the purpose of this mempool handler?

Is it an example, or is it something that could be useful for
DPDK applications?

If it's just an example, I think we could move this code
in app/test.

> --- a/app/test/test_mempool_perf.c
> +++ b/app/test/test_mempool_perf.c
> @@ -52,7 +52,6 @@
>   #include <rte_lcore.h>
>   #include <rte_atomic.h>
>   #include <rte_branch_prediction.h>
> -#include <rte_ring.h>
>   #include <rte_mempool.h>
>   #include <rte_spinlock.h>
>   #include <rte_malloc.h>

Is this change related?



> +struct rte_mempool_common_stack {
> +	/* Spinlock to protect access */
> +	rte_spinlock_t sl;
> +
> +	uint32_t size;
> +	uint32_t len;
> +	void *objs[];
> +
> +#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
> +#endif

There is nothing inside the #ifdef


> +static void *
> +common_stack_alloc(struct rte_mempool *mp,
> +		const char *name, unsigned n, int socket_id, unsigned flags)
> +{
> +	struct rte_mempool_common_stack *s;
> +	char stack_name[RTE_RING_NAMESIZE];
> +
> +	int size = sizeof(*s) + (n+16)*sizeof(void *);
> +
> +	flags = flags;
> +
> +	/* Allocate our local memory structure */
> +	snprintf(stack_name, sizeof(stack_name), "%s-common-stack", name);
> +	s = rte_zmalloc_socket(stack_name,
> +					size, RTE_CACHE_LINE_SIZE, socket_id);
> +	if (s == NULL) {
> +		RTE_LOG(ERR, MEMPOOL, "Cannot allocate stack!\n");
> +		return NULL;
> +	}
> +
> +	/* And the spinlock we use to protect access */
> +	rte_spinlock_init(&s->sl);
> +
> +	s->size = n;
> +	mp->rt_pool = (void *) s;
> +	mp->handler_idx = rte_get_mempool_handler("stack");
> +
> +	return (void *) s;
> +}

The explicit casts could be removed I think.


> +
> +static int common_stack_put(void *p, void * const *obj_table,
> +		unsigned n)
> +{
> +	struct rte_mempool_common_stack *s =
> +				(struct rte_mempool_common_stack *)p;

indent issue (same in get() and count())

> +	void **cache_objs;
> +	unsigned index;
> +
> +	/* Acquire lock */
> +	rte_spinlock_lock(&s->sl);
> +	cache_objs = &s->objs[s->len];
> +
> +	/* Is there sufficient space in the stack ? */
> +	if ((s->len + n) > s->size) {
> +		rte_spinlock_unlock(&s->sl);
> +		return -ENOENT;
> +	}

I think this cannot happen as there is a check in the get().
I wonder if a rte_panic() wouldn't be better here.



Regards,
Olivier

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH 1/5] mempool: add external mempool manager support
  2016-02-04 14:52   ` Olivier MATZ
@ 2016-02-04 16:47     ` Hunt, David
  2016-02-08 11:02       ` Olivier MATZ
  2016-02-04 17:34     ` Hunt, David
  2016-03-01 13:32     ` Hunt, David
  2 siblings, 1 reply; 238+ messages in thread
From: Hunt, David @ 2016-02-04 16:47 UTC (permalink / raw)
  To: Olivier MATZ, dev

On 04/02/2016 14:52, Olivier MATZ wrote:
> Hi David,
>
> Nice work, thanks !
> Please see some comments below.
>
>

[snip]


Olivier,
     Thanks for your comprehensive comments. I'm working on a v2 patch 
based on feedback already received from Jerin, and I'll be sure to 
include your feedback also.
Many thanks,
David.

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH 1/5] mempool: add external mempool manager support
  2016-02-04 14:52   ` Olivier MATZ
  2016-02-04 16:47     ` Hunt, David
@ 2016-02-04 17:34     ` Hunt, David
  2016-02-05  9:26       ` Olivier MATZ
  2016-03-01 13:32     ` Hunt, David
  2 siblings, 1 reply; 238+ messages in thread
From: Hunt, David @ 2016-02-04 17:34 UTC (permalink / raw)
  To: Olivier MATZ, dev

On 04/02/2016 14:52, Olivier MATZ wrote:
> Hi David,

[snip]

Just a comment on one of your comments:

> Why not using a similar mechanism than what we have for PMDs?
>
>      void rte_eal_driver_register(struct rte_driver *driver)
>      {
>          TAILQ_INSERT_TAIL(&dev_driver_list, driver, next);
>      }
>
> To do that, you just need to add a TAILQ_ENTRY() in your
> rte_mempool_handler structure. This would avoid to duplicate the
> structure into a static array whose size is limited.
>
> Accessing to the callbacks would be easier:
>
>      return mp->mp_handler->put(mp->rt_pool, obj_table, n);

One of the iterations of the code did indeed use this mechanism; however,
I ran into problems with multiple processes using the same mempool. In
that case, the 'mp_handler' element of the mempool in your return
statement is only valid for one of the processes. Hence the need for
an index that's valid for all processes rather than a pointer that's
valid for only one. And it's not easy to quickly index into an element
in a queue, hence the array of 16 mempool_handler structs.

[snip]

Rgds,
Dave.

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH 1/5] mempool: add external mempool manager support
  2016-02-04 17:34     ` Hunt, David
@ 2016-02-05  9:26       ` Olivier MATZ
  0 siblings, 0 replies; 238+ messages in thread
From: Olivier MATZ @ 2016-02-05  9:26 UTC (permalink / raw)
  To: Hunt, David, dev

Hi David,

On 02/04/2016 06:34 PM, Hunt, David wrote:
> On 04/02/2016 14:52, Olivier MATZ wrote:
>> Hi David,
> 
> [snip]
> 
> Just a comment on one of your comments:
> 
>> Why not using a similar mechanism than what we have for PMDs?
>>
>>      void rte_eal_driver_register(struct rte_driver *driver)
>>      {
>>          TAILQ_INSERT_TAIL(&dev_driver_list, driver, next);
>>      }
>>
>> To do that, you just need to add a TAILQ_ENTRY() in your
>> rte_mempool_handler structure. This would avoid to duplicate the
>> structure into a static array whose size is limited.
>>
>> Accessing to the callbacks would be easier:
>>
>>      return mp->mp_handler->put(mp->rt_pool, obj_table, n);
> 
> One of the iterations of the code did indeed use this mechanism, however
> I ran into problems with multiple processes using the same mempool. In
> that case, the 'mp_handler' element of the mempool in your return
> statement  is only valid for one of the processes. Hence the need for
> an index that's valid for all processes rather than a pointer that's
> valid for only one. And it's not easy to quickly index into an element
> in a queue, hence the array of 16 mempool_handler structs.

Oh you mean with a secondary processes, I got it now.

Are we sure we can expect that the registered handlers are the same
between multiple processes? For instance, if a handler is registered
with a plugin, the same plugins must be passed to all processes.

I don't see any better solution than yours (except removing secondary
processes of course ;) ).


Thanks for clarifying,
Olivier

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH 5/5] mempool: allow rte_pktmbuf_pool_create switch between memool handlers
  2016-01-26 17:25 ` [PATCH 5/5] mempool: allow rte_pktmbuf_pool_create switch between memool handlers David Hunt
@ 2016-02-05 10:11   ` Olivier MATZ
  0 siblings, 0 replies; 238+ messages in thread
From: Olivier MATZ @ 2016-02-05 10:11 UTC (permalink / raw)
  To: David Hunt, dev

Hi David,

On 01/26/2016 06:25 PM, David Hunt wrote:
> if the user wants to have rte_pktmbuf_pool_create() use an external mempool
> handler, they simply define MEMPOOL_HANDLER_NAME to be the name of the
> mempool handler they wish to use. May move this to config

I agree it could move to configuration.


> 
> Signed-off-by: David Hunt <david.hunt@intel.com>
> ---
>  lib/librte_mbuf/rte_mbuf.c | 11 +++++++++++
>  1 file changed, 11 insertions(+)
> 
> diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
> index c18b438..362396e 100644
> --- a/lib/librte_mbuf/rte_mbuf.c
> +++ b/lib/librte_mbuf/rte_mbuf.c
> @@ -167,10 +167,21 @@ rte_pktmbuf_pool_create(const char *name, unsigned n,
>  	mbp_priv.mbuf_data_room_size = data_room_size;
>  	mbp_priv.mbuf_priv_size = priv_size;
>  
> +/* #define MEMPOOL_HANDLER_NAME "custom_handler" */
> +#undef MEMPOOL_HANDLER_NAME
> +
> +#ifndef MEMPOOL_HANDLER_NAME
>  	return rte_mempool_create(name, n, elt_size,
>  		cache_size, sizeof(struct rte_pktmbuf_pool_private),
>  		rte_pktmbuf_pool_init, &mbp_priv, rte_pktmbuf_init, NULL,
>  		socket_id, 0);
> +#else
> +	return rte_mempool_create_ext(name, n, elt_size,
> +		cache_size, sizeof(struct rte_pktmbuf_pool_private),
> +		rte_pktmbuf_pool_init, &mbp_priv, rte_pktmbuf_init, NULL,
> +		socket_id, 0,
> +		MEMPOOL_HANDLER_NAME);
> +#endif
>  }

Is it possible to always use rte_mempool_create_ext(), and set
the default handler name to 'ring_mp_mc'?

After checking in rte_mempool_create(), I found the behavior
is different when using Xen. Do you know if Xen still works with
your patches?

Regards,
Olivier

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH 1/5] mempool: add external mempool manager support
  2016-02-04 16:47     ` Hunt, David
@ 2016-02-08 11:02       ` Olivier MATZ
  0 siblings, 0 replies; 238+ messages in thread
From: Olivier MATZ @ 2016-02-08 11:02 UTC (permalink / raw)
  To: Hunt, David, dev

Hi David,

On 02/04/2016 05:47 PM, Hunt, David wrote:
> Olivier,
>     Thanks for your comprehensive comments. I'm working on a v2 patch
> based on feedback already received from Jerin, and I'll be sure to
> include your feedback also.
> Many thanks,
> David.


While answering Keith, I realized there is the same kind of ABI
change in your patchset too. It means it should also follow the
ABI deprecation process: dpdk/doc/guides/contributing/versioning.rst.

Regards,
Olivier

^ permalink raw reply	[flat|nested] 238+ messages in thread

* [PATCH 0/6] external mempool manager
  2016-01-26 17:25 [PATCH 0/5] add external mempool manager David Hunt
                   ` (5 preceding siblings ...)
  2016-01-28 17:26 ` [PATCH 0/5] add external mempool manager Jerin Jacob
@ 2016-02-16 14:48 ` David Hunt
  2016-02-16 14:48   ` [PATCH 1/6] mempool: add external mempool manager support David Hunt
                     ` (7 more replies)
  6 siblings, 8 replies; 238+ messages in thread
From: David Hunt @ 2016-02-16 14:48 UTC (permalink / raw)
  To: dev

Hi list.

Here's the v2 version of a proposed patch for an external mempool manager

v2 changes:
 * There was a lot of duplicate code between rte_mempool_xmem_create and
   rte_mempool_create_ext. This has been refactored and is hopefully
   cleaner.
 * The RTE_NEXT_ABI define is now used to allow building of the library
   in a format that is compatible with binaries built against previous
   versions of DPDK.
 * Changes from code reviews. Hopefully I've included most of them.

The External Mempool Manager is an extension to the mempool API that allows
users to add and use an external mempool manager, which allows external memory
subsystems such as external hardware memory management systems and software
based memory allocators to be used with DPDK.

The existing API to the internal DPDK mempool manager will remain unchanged
and will be backward compatible. However, there will be an ABI breakage, as
the mempool struct is changing. These changes are all contained within
RTE_NEXT_ABI defines, and the current or new code can be selected with
the CONFIG_RTE_NEXT_ABI config setting.

There are two aspects to external mempool manager.
  1. Adding the code for your new mempool handler. This is achieved by adding a
     new mempool handler source file into the librte_mempool library, and
     using the REGISTER_MEMPOOL_HANDLER macro.
  2. Using the new API to call rte_mempool_create_ext to create a new mempool
     using the name parameter to identify which handler to use.

New API calls added
 1. A new mempool 'create' function which accepts mempool handler name.
 2. A new mempool 'rte_get_mempool_handler' function which accepts mempool
    handler name, and returns the index to the relevant set of callbacks for
    that mempool handler

Several external mempool managers may be used in the same application. A new
mempool can then be created by using the new 'create' function, providing the
mempool handler name to point the mempool to the relevant mempool manager
callback structure.

The old 'create' function can still be called by legacy programs, and will
internally work out the mempool handle based on the flags provided (single
producer, single consumer, etc). By default handles are created internally to
implement the built-in DPDK mempool manager and mempool types.

The external mempool manager needs to provide the following functions.
 1. alloc     - allocates the mempool memory, and adds each object onto a ring
 2. put       - puts an object back into the mempool once an application has
                finished with it
 3. get       - gets an object from the mempool for use by the application
 4. get_count - gets the number of available objects in the mempool
 5. free      - frees the mempool memory

Every time a get/put/get_count is called from the application/PMD, the
callback for that mempool is called. These functions are in the fastpath,
and any unoptimised handlers may limit performance.

The new APIs are as follows:

1. rte_mempool_create_ext

struct rte_mempool *
rte_mempool_create_ext(const char * name, unsigned n,
        unsigned cache_size, unsigned private_data_size,
        int socket_id, unsigned flags,
        const char * handler_name);

2. rte_mempool_get_handler_name

char *
rte_mempool_get_handler_name(struct rte_mempool *mp);

Please see rte_mempool.h for further information on the parameters.


The important thing to note is that the mempool handler is passed by name
to rte_mempool_create_ext, which in turn calls rte_get_mempool_handler to
get the handler index, which is stored in the rte_mempool structure. This
allows multiple processes to use the same mempool, as the function pointers
are accessed via the handler index.

The mempool handler structure contains callbacks to the implementation of
the handler, and is set up for registration as follows:

static struct rte_mempool_handler handler_sp_mc = {
    .name = "ring_sp_mc",
    .alloc = rte_mempool_common_ring_alloc,
    .put = common_ring_sp_put,
    .get = common_ring_mc_get,
    .get_count = common_ring_get_count,
    .free = common_ring_free,
};

And then the following macro will register the handler in the array of handlers

REGISTER_MEMPOOL_HANDLER(handler_sp_mc);

For an example of a simple malloc-based mempool manager, see
lib/librte_mempool/custom_mempool.c

For an example of API usage, please see app/test/test_ext_mempool.c, which
implements a rudimentary mempool manager using simple mallocs for each
mempool object (custom_mempool.c).

David Hunt (6):
  mempool: add external mempool manager support
  mempool: add stack (lifo) based external mempool handler
  mempool: adds a simple ring-based mempool handler using mallocs for
    objects
  mempool: add autotest for external mempool custom example
  mempool: allow rte_pktmbuf_pool_create switch between memool handlers
  mempool: add in the RTE_NEXT_ABI protection for ABI breakages

 app/test/Makefile                          |   3 +
 app/test/test_ext_mempool.c                | 451 +++++++++++++++++++++++++++++
 app/test/test_mempool_perf.c               |   2 +
 config/common_bsdapp                       |   2 +
 config/common_linuxapp                     |   2 +
 lib/librte_mbuf/rte_mbuf.c                 |  15 +
 lib/librte_mempool/Makefile                |   5 +
 lib/librte_mempool/custom_mempool.c        | 146 ++++++++++
 lib/librte_mempool/rte_mempool.c           | 383 ++++++++++++++++++++++--
 lib/librte_mempool/rte_mempool.h           | 213 +++++++++++++-
 lib/librte_mempool/rte_mempool_default.c   | 236 +++++++++++++++
 lib/librte_mempool/rte_mempool_internal.h  |  75 +++++
 lib/librte_mempool/rte_mempool_stack.c     | 164 +++++++++++
 lib/librte_mempool/rte_mempool_version.map |   1 +
 14 files changed, 1665 insertions(+), 33 deletions(-)
 create mode 100644 app/test/test_ext_mempool.c
 create mode 100644 lib/librte_mempool/custom_mempool.c
 create mode 100644 lib/librte_mempool/rte_mempool_default.c
 create mode 100644 lib/librte_mempool/rte_mempool_internal.h
 create mode 100644 lib/librte_mempool/rte_mempool_stack.c

-- 
2.5.0

^ permalink raw reply	[flat|nested] 238+ messages in thread

* [PATCH 1/6] mempool: add external mempool manager support
  2016-02-16 14:48 ` [PATCH 0/6] " David Hunt
@ 2016-02-16 14:48   ` David Hunt
  2016-02-16 19:27     ` [dpdk-dev, " Jan Viktorin
  2016-02-19 13:30     ` [PATCH " Olivier MATZ
  2016-02-16 14:48   ` [PATCH 2/6] mempool: add stack (lifo) based external mempool handler David Hunt
                     ` (6 subsequent siblings)
  7 siblings, 2 replies; 238+ messages in thread
From: David Hunt @ 2016-02-16 14:48 UTC (permalink / raw)
  To: dev

Adds the new rte_mempool_create_ext API and callback mechanism for
external mempool handlers

Modifies the existing rte_mempool_create to set up the handler_idx to
the relevant mempool handler based on the handler name:
    ring_sp_sc
    ring_mp_mc
    ring_sp_mc
    ring_mp_sc

v2: merges the duplicated code in rte_mempool_xmem_create and
rte_mempool_create_ext into one common function. The old functions
now call the new common function with the relevant parameters.

Signed-off-by: David Hunt <david.hunt@intel.com>
---
 app/test/test_mempool_perf.c               |   1 -
 lib/librte_mempool/Makefile                |   2 +
 lib/librte_mempool/rte_mempool.c           | 383 ++++++++++++++++++-----------
 lib/librte_mempool/rte_mempool.h           | 200 ++++++++++++---
 lib/librte_mempool/rte_mempool_default.c   | 236 ++++++++++++++++++
 lib/librte_mempool/rte_mempool_internal.h  |  75 ++++++
 lib/librte_mempool/rte_mempool_version.map |   1 +
 7 files changed, 717 insertions(+), 181 deletions(-)
 create mode 100644 lib/librte_mempool/rte_mempool_default.c
 create mode 100644 lib/librte_mempool/rte_mempool_internal.h

diff --git a/app/test/test_mempool_perf.c b/app/test/test_mempool_perf.c
index cdc02a0..091c1df 100644
--- a/app/test/test_mempool_perf.c
+++ b/app/test/test_mempool_perf.c
@@ -161,7 +161,6 @@ per_lcore_mempool_test(__attribute__((unused)) void *arg)
 							   n_get_bulk);
 				if (unlikely(ret < 0)) {
 					rte_mempool_dump(stdout, mp);
-					rte_ring_dump(stdout, mp->ring);
 					/* in this case, objects are lost... */
 					return -1;
 				}
diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
index a6898ef..aeaffd1 100644
--- a/lib/librte_mempool/Makefile
+++ b/lib/librte_mempool/Makefile
@@ -42,6 +42,8 @@ LIBABIVER := 1
 
 # all source are stored in SRCS-y
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool.c
+SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_default.c
+
 ifeq ($(CONFIG_RTE_LIBRTE_XEN_DOM0),y)
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_dom0_mempool.c
 endif
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index aff5f6d..a577a3e 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -59,10 +59,11 @@
 #include <rte_spinlock.h>
 
 #include "rte_mempool.h"
+#include "rte_mempool_internal.h"
 
 TAILQ_HEAD(rte_mempool_list, rte_tailq_entry);
 
-static struct rte_tailq_elem rte_mempool_tailq = {
+struct rte_tailq_elem rte_mempool_tailq = {
 	.name = "RTE_MEMPOOL",
 };
 EAL_REGISTER_TAILQ(rte_mempool_tailq)
@@ -149,7 +150,7 @@ mempool_add_elem(struct rte_mempool *mp, void *obj, uint32_t obj_idx,
 		obj_init(mp, obj_init_arg, obj, obj_idx);
 
 	/* enqueue in ring */
-	rte_ring_sp_enqueue(mp->ring, obj);
+	rte_mempool_ext_put_bulk(mp, &obj, 1);
 }
 
 uint32_t
@@ -375,26 +376,6 @@ rte_mempool_xmem_usage(void *vaddr, uint32_t elt_num, size_t elt_sz,
 	return usz;
 }
 
-#ifndef RTE_LIBRTE_XEN_DOM0
-/* stub if DOM0 support not configured */
-struct rte_mempool *
-rte_dom0_mempool_create(const char *name __rte_unused,
-			unsigned n __rte_unused,
-			unsigned elt_size __rte_unused,
-			unsigned cache_size __rte_unused,
-			unsigned private_data_size __rte_unused,
-			rte_mempool_ctor_t *mp_init __rte_unused,
-			void *mp_init_arg __rte_unused,
-			rte_mempool_obj_ctor_t *obj_init __rte_unused,
-			void *obj_init_arg __rte_unused,
-			int socket_id __rte_unused,
-			unsigned flags __rte_unused)
-{
-	rte_errno = EINVAL;
-	return NULL;
-}
-#endif
-
 /* create the mempool */
 struct rte_mempool *
 rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
@@ -420,117 +401,76 @@ rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
 }
 
 /*
+ * Common mempool create function.
  * Create the mempool over already allocated chunk of memory.
  * That external memory buffer can consists of physically disjoint pages.
  * Setting vaddr to NULL, makes mempool to fallback to original behaviour
- * and allocate space for mempool and it's elements as one big chunk of
- * physically continuos memory.
- * */
-struct rte_mempool *
-rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
+ * which will call rte_mempool_ext_alloc to allocate the object memory.
> + * If it is an internal mempool handler, it will allocate space for mempool
> + * and its elements as one big chunk of physically contiguous memory.
+ * If it is an external mempool handler, it will allocate space for mempool
+ * and call the rte_mempool_ext_alloc for the object memory.
+ */
+static struct rte_mempool *
+mempool_create(const char *name,
+		unsigned num_elt, unsigned elt_size,
 		unsigned cache_size, unsigned private_data_size,
 		rte_mempool_ctor_t *mp_init, void *mp_init_arg,
 		rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
-		int socket_id, unsigned flags, void *vaddr,
-		const phys_addr_t paddr[], uint32_t pg_num, uint32_t pg_shift)
+		int socket_id, unsigned flags,
+		void *vaddr, const phys_addr_t paddr[],
+		uint32_t pg_num, uint32_t pg_shift,
+		const char *handler_name)
 {
-	char mz_name[RTE_MEMZONE_NAMESIZE];
-	char rg_name[RTE_RING_NAMESIZE];
+	const struct rte_memzone *mz;
 	struct rte_mempool_list *mempool_list;
 	struct rte_mempool *mp = NULL;
 	struct rte_tailq_entry *te;
-	struct rte_ring *r;
-	const struct rte_memzone *mz;
-	size_t mempool_size;
+	char mz_name[RTE_MEMZONE_NAMESIZE];
 	int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
-	int rg_flags = 0;
-	void *obj;
 	struct rte_mempool_objsz objsz;
-	void *startaddr;
+	void *startaddr = NULL;
 	int page_size = getpagesize();
-
-	/* compilation-time checks */
-	RTE_BUILD_BUG_ON((sizeof(struct rte_mempool) &
-			  RTE_CACHE_LINE_MASK) != 0);
-#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
-	RTE_BUILD_BUG_ON((sizeof(struct rte_mempool_cache) &
-			  RTE_CACHE_LINE_MASK) != 0);
-	RTE_BUILD_BUG_ON((offsetof(struct rte_mempool, local_cache) &
-			  RTE_CACHE_LINE_MASK) != 0);
-#endif
-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-	RTE_BUILD_BUG_ON((sizeof(struct rte_mempool_debug_stats) &
-			  RTE_CACHE_LINE_MASK) != 0);
-	RTE_BUILD_BUG_ON((offsetof(struct rte_mempool, stats) &
-			  RTE_CACHE_LINE_MASK) != 0);
-#endif
+	void *obj = NULL;
+	size_t mempool_size;
 
 	mempool_list = RTE_TAILQ_CAST(rte_mempool_tailq.head, rte_mempool_list);
 
 	/* asked cache too big */
 	if (cache_size > RTE_MEMPOOL_CACHE_MAX_SIZE ||
-	    CALC_CACHE_FLUSHTHRESH(cache_size) > n) {
+		CALC_CACHE_FLUSHTHRESH(cache_size) > num_elt) {
 		rte_errno = EINVAL;
 		return NULL;
 	}
 
-	/* check that we have both VA and PA */
-	if (vaddr != NULL && paddr == NULL) {
-		rte_errno = EINVAL;
-		return NULL;
-	}
-
-	/* Check that pg_num and pg_shift parameters are valid. */
-	if (pg_num < RTE_DIM(mp->elt_pa) || pg_shift > MEMPOOL_PG_SHIFT_MAX) {
-		rte_errno = EINVAL;
-		return NULL;
-	}
-
-	/* "no cache align" imply "no spread" */
-	if (flags & MEMPOOL_F_NO_CACHE_ALIGN)
-		flags |= MEMPOOL_F_NO_SPREAD;
+	if (flags && MEMPOOL_F_INT_HANDLER) {
+		/* Check that pg_num and pg_shift parameters are valid. */
+		if (pg_num < RTE_DIM(mp->elt_pa) ||
+				pg_shift > MEMPOOL_PG_SHIFT_MAX) {
+			rte_errno = EINVAL;
+			return NULL;
+		}
 
-	/* ring flags */
-	if (flags & MEMPOOL_F_SP_PUT)
-		rg_flags |= RING_F_SP_ENQ;
-	if (flags & MEMPOOL_F_SC_GET)
-		rg_flags |= RING_F_SC_DEQ;
+		/* "no cache align" imply "no spread" */
+		if (flags & MEMPOOL_F_NO_CACHE_ALIGN)
+			flags |= MEMPOOL_F_NO_SPREAD;
 
-	/* calculate mempool object sizes. */
-	if (!rte_mempool_calc_obj_size(elt_size, flags, &objsz)) {
-		rte_errno = EINVAL;
-		return NULL;
+		/* calculate mempool object sizes. */
+		if (!rte_mempool_calc_obj_size(elt_size, flags, &objsz)) {
+			rte_errno = EINVAL;
+			return NULL;
+		}
 	}
 
 	rte_rwlock_write_lock(RTE_EAL_MEMPOOL_RWLOCK);
 
-	/* allocate the ring that will be used to store objects */
-	/* Ring functions will return appropriate errors if we are
-	 * running as a secondary process etc., so no checks made
-	 * in this function for that condition */
-	snprintf(rg_name, sizeof(rg_name), RTE_MEMPOOL_MZ_FORMAT, name);
-	r = rte_ring_create(rg_name, rte_align32pow2(n+1), socket_id, rg_flags);
-	if (r == NULL)
-		goto exit;
-
 	/*
 	 * reserve a memory zone for this mempool: private data is
 	 * cache-aligned
 	 */
-	private_data_size = (private_data_size +
-			     RTE_MEMPOOL_ALIGN_MASK) & (~RTE_MEMPOOL_ALIGN_MASK);
+	private_data_size = RTE_ALIGN_CEIL(private_data_size,
+						RTE_MEMPOOL_ALIGN);
 
-	if (! rte_eal_has_hugepages()) {
-		/*
-		 * expand private data size to a whole page, so that the
-		 * first pool element will start on a new standard page
-		 */
-		int head = sizeof(struct rte_mempool);
-		int new_size = (private_data_size + head) % page_size;
-		if (new_size) {
-			private_data_size += page_size - new_size;
-		}
-	}
 
 	/* try to allocate tailq entry */
 	te = rte_zmalloc("MEMPOOL_TAILQ_ENTRY", sizeof(*te), 0);
@@ -539,23 +479,51 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
 		goto exit;
 	}
 
-	/*
-	 * If user provided an external memory buffer, then use it to
-	 * store mempool objects. Otherwise reserve a memzone that is large
-	 * enough to hold mempool header and metadata plus mempool objects.
-	 */
-	mempool_size = MEMPOOL_HEADER_SIZE(mp, pg_num) + private_data_size;
-	mempool_size = RTE_ALIGN_CEIL(mempool_size, RTE_MEMPOOL_ALIGN);
-	if (vaddr == NULL)
-		mempool_size += (size_t)objsz.total_size * n;
+	if (flags & MEMPOOL_F_INT_HANDLER) {
 
-	if (! rte_eal_has_hugepages()) {
+		if (!rte_eal_has_hugepages()) {
+			/*
+			 * expand private data size to a whole page, so that the
+			 * first pool element will start on a new standard page
+			 */
+			int head = sizeof(struct rte_mempool);
+			int new_size = (private_data_size + head) % page_size;
+
+			if (new_size)
+				private_data_size += page_size - new_size;
+		}
+
+		/*
+		 * If user provided an external memory buffer, then use it to
+		 * store mempool objects. Otherwise reserve a memzone that is
+		 * large enough to hold mempool header and metadata plus
+		 * mempool objects
+		 */
+		mempool_size =
+			MEMPOOL_HEADER_SIZE(mp, pg_num) + private_data_size;
+		mempool_size =
+			RTE_ALIGN_CEIL(mempool_size, RTE_MEMPOOL_ALIGN);
+		if (vaddr == NULL)
+			mempool_size += (size_t)objsz.total_size * num_elt;
+
+		if (!rte_eal_has_hugepages()) {
+			/*
+			 * we want the memory pool to start on a page boundary,
+			 * because pool elements crossing page boundaries would
+			 * result in discontiguous physical addresses
+			 */
+			mempool_size += page_size;
+		}
+	} else {
 		/*
-		 * we want the memory pool to start on a page boundary,
-		 * because pool elements crossing page boundaries would
-		 * result in discontiguous physical addresses
+		 * If user provided an external memory buffer, then use it to
+		 * store mempool objects. Otherwise reserve a memzone that is
+		 * large enough to hold mempool header and metadata plus
+		 * mempool objects
 		 */
-		mempool_size += page_size;
+		mempool_size = sizeof(*mp) + private_data_size;
+		mempool_size = RTE_ALIGN_CEIL(mempool_size, RTE_MEMPOOL_ALIGN);
 	}
 
 	snprintf(mz_name, sizeof(mz_name), RTE_MEMPOOL_MZ_FORMAT, name);
@@ -563,24 +531,29 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
 	mz = rte_memzone_reserve(mz_name, mempool_size, socket_id, mz_flags);
 
 	/*
-	 * no more memory: in this case we loose previously reserved
-	 * space for the ring as we cannot free it
+	 * no more memory
 	 */
 	if (mz == NULL) {
 		rte_free(te);
 		goto exit;
 	}
 
-	if (rte_eal_has_hugepages()) {
-		startaddr = (void*)mz->addr;
-	} else {
-		/* align memory pool start address on a page boundary */
-		unsigned long addr = (unsigned long)mz->addr;
-		if (addr & (page_size - 1)) {
-			addr += page_size;
-			addr &= ~(page_size - 1);
+	if (flags & MEMPOOL_F_INT_HANDLER) {
+
+		if (rte_eal_has_hugepages()) {
+			startaddr = (void *)mz->addr;
+		} else {
+			/* align memory pool start address on a page boundary */
+			unsigned long addr = (unsigned long)mz->addr;
+
+			if (addr & (page_size - 1)) {
+				addr += page_size;
+				addr &= ~(page_size - 1);
+			}
+			startaddr = (void *)addr;
 		}
-		startaddr = (void*)addr;
+	} else {
+		startaddr = (void *)mz->addr;
 	}
 
 	/* init the mempool structure */
@@ -588,8 +561,7 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
 	memset(mp, 0, sizeof(*mp));
 	snprintf(mp->name, sizeof(mp->name), "%s", name);
 	mp->phys_addr = mz->phys_addr;
-	mp->ring = r;
-	mp->size = n;
+	mp->size = num_elt;
 	mp->flags = flags;
 	mp->elt_size = objsz.elt_size;
 	mp->header_size = objsz.header_size;
@@ -598,35 +570,54 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
 	mp->cache_flushthresh = CALC_CACHE_FLUSHTHRESH(cache_size);
 	mp->private_data_size = private_data_size;
 
-	/* calculate address of the first element for continuous mempool. */
-	obj = (char *)mp + MEMPOOL_HEADER_SIZE(mp, pg_num) +
-		private_data_size;
-	obj = RTE_PTR_ALIGN_CEIL(obj, RTE_MEMPOOL_ALIGN);
-
-	/* populate address translation fields. */
-	mp->pg_num = pg_num;
-	mp->pg_shift = pg_shift;
-	mp->pg_mask = RTE_LEN2MASK(mp->pg_shift, typeof(mp->pg_mask));
+	mp->handler_idx = rte_get_mempool_handler_idx(handler_name);
+	if (mp->handler_idx < 0) {
+		RTE_LOG(ERR, MEMPOOL,
+			"Cannot find mempool handler '%s'\n", handler_name);
+		rte_free(te);
+		goto exit;
+	}
 
-	/* mempool elements allocated together with mempool */
-	if (vaddr == NULL) {
-		mp->elt_va_start = (uintptr_t)obj;
-		mp->elt_pa[0] = mp->phys_addr +
-			(mp->elt_va_start - (uintptr_t)mp);
+	if (flags & MEMPOOL_F_INT_HANDLER) {
+		/* calculate address of first element for continuous mempool. */
+		obj = (char *)mp + MEMPOOL_HEADER_SIZE(mp, pg_num) +
+			private_data_size;
+		obj = RTE_PTR_ALIGN_CEIL(obj, RTE_MEMPOOL_ALIGN);
+
+		/* populate address translation fields. */
+		mp->pg_num = pg_num;
+		mp->pg_shift = pg_shift;
+		mp->pg_mask = RTE_LEN2MASK(mp->pg_shift, typeof(mp->pg_mask));
+
+		/* mempool elements allocated together with mempool */
+		if (vaddr == NULL) {
+			mp->elt_va_start = (uintptr_t)obj;
+			mp->elt_pa[0] = mp->phys_addr +
+				(mp->elt_va_start - (uintptr_t)mp);
+		/* mempool elements in a separate chunk of memory. */
+		} else {
+			mp->elt_va_start = (uintptr_t)vaddr;
+			memcpy(mp->elt_pa, paddr,
+				sizeof(mp->elt_pa[0]) * pg_num);
+		}
 
-	/* mempool elements in a separate chunk of memory. */
-	} else {
-		mp->elt_va_start = (uintptr_t)vaddr;
-		memcpy(mp->elt_pa, paddr, sizeof (mp->elt_pa[0]) * pg_num);
+		mp->elt_va_end = mp->elt_va_start;
 	}
 
-	mp->elt_va_end = mp->elt_va_start;
+	/* Parameters are set up; call the mempool handler's alloc callback */
+	mp->rt_pool =
+		rte_mempool_ext_alloc(mp, name, num_elt, socket_id, flags);
+	if (mp->rt_pool == NULL) {
+		RTE_LOG(ERR, MEMPOOL, "Failed to alloc mempool!\n");
+		rte_free(te);
+		goto exit;
+	}
 
 	/* call the initializer */
 	if (mp_init)
 		mp_init(mp, mp_init_arg);
 
-	mempool_populate(mp, n, 1, obj_init, obj_init_arg);
+	if (obj_init)
+		mempool_populate(mp, num_elt, 1, obj_init, obj_init_arg);
 
 	te->data = (void *) mp;
 
@@ -640,13 +631,79 @@ exit:
 	return mp;
 }
 
+/* Create the mempool over already allocated chunk of memory */
+struct rte_mempool *
+rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
+		unsigned cache_size, unsigned private_data_size,
+		rte_mempool_ctor_t *mp_init, void *mp_init_arg,
+		rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
+		int socket_id, unsigned flags, void *vaddr,
+		const phys_addr_t paddr[], uint32_t pg_num, uint32_t pg_shift)
+{
+	struct rte_mempool *mp = NULL;
+	char handler_name[RTE_MEMPOOL_NAMESIZE];
+
+	/* compilation-time checks */
+	RTE_BUILD_BUG_ON((sizeof(struct rte_mempool) &
+			  RTE_CACHE_LINE_MASK) != 0);
+#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
+	RTE_BUILD_BUG_ON((sizeof(struct rte_mempool_cache) &
+			  RTE_CACHE_LINE_MASK) != 0);
+	RTE_BUILD_BUG_ON((offsetof(struct rte_mempool, local_cache) &
+			  RTE_CACHE_LINE_MASK) != 0);
+#endif
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+	RTE_BUILD_BUG_ON((sizeof(struct rte_mempool_debug_stats) &
+			  RTE_CACHE_LINE_MASK) != 0);
+	RTE_BUILD_BUG_ON((offsetof(struct rte_mempool, stats) &
+			  RTE_CACHE_LINE_MASK) != 0);
+#endif
+
+	/* check that we have both VA and PA */
+	if (vaddr != NULL && paddr == NULL) {
+		rte_errno = EINVAL;
+		return NULL;
+	}
+
+	/*
+	 * Examine the SP/SC flags to select the matching built-in handler
+	 * (one of the four ring-based SP/SC/MP/MC combinations).
+	 */
+	if ((flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET)) ==
+			(MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
+		sprintf(handler_name, "%s", "ring_sp_sc");
+	else if (flags & MEMPOOL_F_SP_PUT)
+		sprintf(handler_name, "%s", "ring_sp_mc");
+	else if (flags & MEMPOOL_F_SC_GET)
+		sprintf(handler_name, "%s", "ring_mp_sc");
+	else
+		sprintf(handler_name, "%s", "ring_mp_mc");
+
+	flags |= MEMPOOL_F_INT_HANDLER;
+
+	mp = mempool_create(name,
+		n, elt_size,
+		cache_size, private_data_size,
+		mp_init, mp_init_arg,
+		obj_init, obj_init_arg,
+		socket_id,
+		flags,
+		vaddr, paddr,
+		pg_num, pg_shift,
+		handler_name);
+
+	return mp;
+}
+
 /* Return the number of entries in the mempool */
 unsigned
 rte_mempool_count(const struct rte_mempool *mp)
 {
 	unsigned count;
 
-	count = rte_ring_count(mp->ring);
+	count = rte_mempool_ext_get_count(mp);
 
 #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
 	{
@@ -802,7 +859,6 @@ rte_mempool_dump(FILE *f, const struct rte_mempool *mp)
 
 	fprintf(f, "mempool <%s>@%p\n", mp->name, mp);
 	fprintf(f, "  flags=%x\n", mp->flags);
-	fprintf(f, "  ring=<%s>@%p\n", mp->ring->name, mp->ring);
 	fprintf(f, "  phys_addr=0x%" PRIx64 "\n", mp->phys_addr);
 	fprintf(f, "  size=%"PRIu32"\n", mp->size);
 	fprintf(f, "  header_size=%"PRIu32"\n", mp->header_size);
@@ -825,7 +881,7 @@ rte_mempool_dump(FILE *f, const struct rte_mempool *mp)
 			mp->size);
 
 	cache_count = rte_mempool_dump_cache(f, mp);
-	common_count = rte_ring_count(mp->ring);
+	common_count = rte_mempool_ext_get_count(mp);
 	if ((cache_count + common_count) > mp->size)
 		common_count = mp->size - cache_count;
 	fprintf(f, "  common_pool_count=%u\n", common_count);
@@ -919,3 +975,30 @@ void rte_mempool_walk(void (*func)(const struct rte_mempool *, void *),
 
 	rte_rwlock_read_unlock(RTE_EAL_MEMPOOL_RWLOCK);
 }
+
+/* create the mempool using an external mempool manager */
+struct rte_mempool *
+rte_mempool_create_ext(const char *name, unsigned n, unsigned elt_size,
+	unsigned cache_size, unsigned private_data_size,
+	rte_mempool_ctor_t *mp_init, void *mp_init_arg,
+	rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
+	int socket_id, unsigned flags,
+	const char *handler_name)
+{
+	struct rte_mempool *mp = NULL;
+
+	mp = mempool_create(name,
+		n, elt_size,
+		cache_size, private_data_size,
+		mp_init, mp_init_arg,
+		obj_init, obj_init_arg,
+		socket_id, flags,
+		NULL, NULL,              /* vaddr, paddr */
+		0, 0,                    /* pg_num, pg_shift, */
+		handler_name);
+
+	return mp;
+}
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 9745bf0..3705fbd 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -88,6 +88,8 @@ extern "C" {
 struct rte_mempool_debug_stats {
 	uint64_t put_bulk;         /**< Number of puts. */
 	uint64_t put_objs;         /**< Number of objects successfully put. */
+	uint64_t put_pool_bulk;    /**< Number of puts into pool. */
+	uint64_t put_pool_objs;    /**< Number of objects into pool. */
 	uint64_t get_success_bulk; /**< Successful allocation number. */
 	uint64_t get_success_objs; /**< Objects successfully allocated. */
 	uint64_t get_fail_bulk;    /**< Failed allocation number. */
@@ -175,12 +177,85 @@ struct rte_mempool_objtlr {
 #endif
 };
 
+/* Handler functions for external mempool support */
+typedef void *(*rte_mempool_alloc_t)(struct rte_mempool *mp,
+		const char *name, unsigned n, int socket_id, unsigned flags);
+typedef int (*rte_mempool_put_t)(void *p,
+		void * const *obj_table, unsigned n);
+typedef int (*rte_mempool_get_t)(void *p, void **obj_table,
+		unsigned n);
+typedef unsigned (*rte_mempool_get_count)(void *p);
+typedef int (*rte_mempool_free_t)(struct rte_mempool *mp);
+
+/**
+ * @internal wrapper for external mempool manager alloc callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param name
+ *   Name of the memory pool.
+ * @param n
+ *   Number of objects in the mempool.
+ * @param socket_id
+ *   socket id on which to allocate.
+ * @param flags
+ *   general flags to allocate function (MEMPOOL_F_* flags)
+ */
+void *
+rte_mempool_ext_alloc(struct rte_mempool *mp,
+		const char *name, unsigned n, int socket_id, unsigned flags);
+
+/**
+ * @internal wrapper for external mempool manager get callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param obj_table
+ *   Pointer to a table of void * pointers (objects).
+ * @param n
+ *   Number of objects to get
+ */
+int
+rte_mempool_ext_get_bulk(struct rte_mempool *mp, void **obj_table,
+		unsigned n);
+
+/**
+ * @internal wrapper for external mempool manager put callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param obj_table
+ *   Pointer to a table of void * pointers (objects).
+ * @param n
+ *   Number of objects to put
+ */
+int
+rte_mempool_ext_put_bulk(struct rte_mempool *mp, void * const *obj_table,
+		unsigned n);
+
+/**
+ * @internal wrapper for external mempool manager get_count callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ */
+unsigned
+rte_mempool_ext_get_count(const struct rte_mempool *mp);
+
+/**
+ * @internal wrapper for external mempool manager free callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ */
+int
+rte_mempool_ext_free(struct rte_mempool *mp);
+
 /**
  * The RTE mempool structure.
  */
 struct rte_mempool {
 	char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool. */
-	struct rte_ring *ring;           /**< Ring to store objects. */
 	phys_addr_t phys_addr;           /**< Phys. addr. of mempool struct. */
 	int flags;                       /**< Flags of the mempool. */
 	uint32_t size;                   /**< Size of the mempool. */
@@ -194,6 +269,11 @@ struct rte_mempool {
 
 	unsigned private_data_size;      /**< Size of private data. */
 
+	/* Common pool data structure pointer */
+	void *rt_pool;
+
+	int16_t handler_idx;
+
 #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
 	/** Per-lcore local cache. */
 	struct rte_mempool_cache local_cache[RTE_MAX_LCORE];
@@ -223,6 +303,8 @@ struct rte_mempool {
 #define MEMPOOL_F_NO_CACHE_ALIGN 0x0002 /**< Do not align objs on cache lines.*/
 #define MEMPOOL_F_SP_PUT         0x0004 /**< Default put is "single-producer".*/
 #define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
+#define MEMPOOL_F_INT_HANDLER    0x0020 /**< Using internal mempool handler */
 
 /**
  * @internal When debug is enabled, store some statistics.
@@ -753,7 +835,7 @@ void rte_mempool_dump(FILE *f, const struct rte_mempool *mp);
  */
 static inline void __attribute__((always_inline))
 __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
-		    unsigned n, int is_mp)
+		    unsigned n, __attribute__((unused)) int is_mp)
 {
 #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
 	struct rte_mempool_cache *cache;
@@ -769,8 +851,7 @@ __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
 
 #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
 	/* cache is not enabled or single producer or non-EAL thread */
-	if (unlikely(cache_size == 0 || is_mp == 0 ||
-		     lcore_id >= RTE_MAX_LCORE))
+	if (unlikely(cache_size == 0 || lcore_id >= RTE_MAX_LCORE))
 		goto ring_enqueue;
 
 	/* Go straight to ring if put would overflow mem allocated for cache */
@@ -793,8 +874,8 @@ __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
 
 	cache->len += n;
 
-	if (cache->len >= flushthresh) {
-		rte_ring_mp_enqueue_bulk(mp->ring, &cache->objs[cache_size],
+	if (unlikely(cache->len >= flushthresh)) {
+		rte_mempool_ext_put_bulk(mp, &cache->objs[cache_size],
 				cache->len - cache_size);
 		cache->len = cache_size;
 	}
@@ -804,22 +885,10 @@ __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
 ring_enqueue:
 #endif /* RTE_MEMPOOL_CACHE_MAX_SIZE > 0 */
 
-	/* push remaining objects in ring */
-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-	if (is_mp) {
-		if (rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n) < 0)
-			rte_panic("cannot put objects in mempool\n");
-	}
-	else {
-		if (rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n) < 0)
-			rte_panic("cannot put objects in mempool\n");
-	}
-#else
-	if (is_mp)
-		rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n);
-	else
-		rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n);
-#endif
+	/* Increment stats counter to tell us how many pool puts happened */
+	__MEMPOOL_STAT_ADD(mp, put_pool, n);
+
+	rte_mempool_ext_put_bulk(mp, obj_table, n);
 }
 
 
@@ -943,7 +1012,7 @@ rte_mempool_put(struct rte_mempool *mp, void *obj)
  */
 static inline int __attribute__((always_inline))
 __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
-		   unsigned n, int is_mc)
+		   unsigned n, __attribute__((unused)) int is_mc)
 {
 	int ret;
 #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
@@ -954,8 +1023,8 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
 	uint32_t cache_size = mp->cache_size;
 
 	/* cache is not enabled or single consumer */
-	if (unlikely(cache_size == 0 || is_mc == 0 ||
-		     n >= cache_size || lcore_id >= RTE_MAX_LCORE))
+	if (unlikely(cache_size == 0 || n >= cache_size ||
+						lcore_id >= RTE_MAX_LCORE))
 		goto ring_dequeue;
 
 	cache = &mp->local_cache[lcore_id];
@@ -967,7 +1036,8 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
 		uint32_t req = n + (cache_size - cache->len);
 
 		/* How many do we require i.e. number to fill the cache + the request */
-		ret = rte_ring_mc_dequeue_bulk(mp->ring, &cache->objs[cache->len], req);
+		ret = rte_mempool_ext_get_bulk(mp,
+						&cache->objs[cache->len], req);
 		if (unlikely(ret < 0)) {
 			/*
 			 * In the offchance that we are buffer constrained,
@@ -995,10 +1065,7 @@ ring_dequeue:
 #endif /* RTE_MEMPOOL_CACHE_MAX_SIZE > 0 */
 
 	/* get remaining objects from ring */
-	if (is_mc)
-		ret = rte_ring_mc_dequeue_bulk(mp->ring, obj_table, n);
-	else
-		ret = rte_ring_sc_dequeue_bulk(mp->ring, obj_table, n);
+	ret = rte_mempool_ext_get_bulk(mp, obj_table, n);
 
 	if (ret < 0)
 		__MEMPOOL_STAT_ADD(mp, get_fail, n);
@@ -1401,6 +1468,79 @@ ssize_t rte_mempool_xmem_usage(void *vaddr, uint32_t elt_num, size_t elt_sz,
 void rte_mempool_walk(void (*func)(const struct rte_mempool *, void *arg),
 		      void *arg);
 
+/**
+ * Function to get the name of a mempool handler
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @return
+ *   The name of the mempool handler
+ */
+char *rte_mempool_get_handler_name(struct rte_mempool *mp);
+
+/**
+ * Create a new mempool named *name* in memory.
+ *
+ * This function uses an externally defined alloc callback to allocate memory.
+ * Its size is set to n elements.
+ * All elements of the mempool are allocated separately from the mempool header.
+ *
+ * @param name
+ *   The name of the mempool.
+ * @param n
+ *   The number of elements in the mempool. The optimum size (in terms of
+ *   memory usage) for a mempool is when n is a power of two minus one:
+ *   n = (2^q - 1).
+ * @param cache_size
+ *   If cache_size is non-zero, the rte_mempool library will try to
+ *   limit the accesses to the common lockless pool, by maintaining a
+ *   per-lcore object cache. This argument must be lower or equal to
+ *   CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE and n / 1.5. It is advised to choose
+ *   cache_size to have "n modulo cache_size == 0": if this is
+ *   not the case, some elements will always stay in the pool and will
+ *   never be used. The access to the per-lcore table is of course
+ *   faster than the multi-producer/consumer pool. The cache can be
+ *   disabled if the cache_size argument is set to 0; it can be useful to
+ *   avoid losing objects in cache. Note that even if not used, the
+ *   memory space for cache is always reserved in a mempool structure,
+ *   except if CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE is set to 0.
+ * @param private_data_size
+ *   The size of the private data appended after the mempool
+ *   structure. This is useful for storing some private data after the
+ *   mempool structure, as is done for rte_mbuf_pool for example.
+ * @param mp_init
+ *   A function pointer that is called for initialization of the pool,
+ *   before object initialization. The user can initialize the private
+ *   data in this function if needed. This parameter can be NULL if
+ *   not needed.
+ * @param mp_init_arg
+ *   An opaque pointer to data that can be used in the mempool
+ *   constructor function.
+ * @param obj_init
+ *   A function pointer that is called for each object at
+ *   initialization of the pool. The user can set some meta data in
+ *   objects if needed. This parameter can be NULL if not needed.
+ *   The obj_init() function takes the mempool pointer, the init_arg,
+ *   the object pointer and the object number as parameters.
+ * @param obj_init_arg
+ *   An opaque pointer to data that can be used as an argument for
+ *   each call to the object constructor function.
+ * @param socket_id
+ *   The *socket_id* argument is the socket identifier in the case of
+ *   NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
+ *   constraint for the reserved zone.
+ * @param flags
+ *   Flags controlling the behaviour of the mempool (MEMPOOL_F_* flags).
+ * @return
+ *   The pointer to the newly allocated mempool on success, NULL on error.
+ */
+struct rte_mempool *
+rte_mempool_create_ext(const char *name, unsigned n, unsigned elt_size,
+		unsigned cache_size, unsigned private_data_size,
+		rte_mempool_ctor_t *mp_init, void *mp_init_arg,
+		rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
+		int socket_id, unsigned flags,
+		const char *handler_name);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_mempool/rte_mempool_default.c b/lib/librte_mempool/rte_mempool_default.c
new file mode 100644
index 0000000..ca3255e
--- /dev/null
+++ b/lib/librte_mempool/rte_mempool_default.c
@@ -0,0 +1,236 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <rte_mempool.h>
+#include <rte_malloc.h>
+#include <string.h>
+
+#include "rte_mempool.h"
+#include "rte_mempool_internal.h"
+
+/*
+ * Indirect jump table to support external memory pools
+ */
+struct rte_mempool_handler_list mempool_handler_list = {
+	.sl = RTE_SPINLOCK_INITIALIZER,
+	.num_handlers = 0
+};
+
+/*
+ * Return the name of the mempool handler used by this mempool
+ */
+char *
+rte_mempool_get_handler_name(struct rte_mempool *mp)
+{
+	return mempool_handler_list.handler[mp->handler_idx].name;
+}
+
+int16_t
+rte_mempool_register_handler(struct rte_mempool_handler *h)
+{
+	int16_t handler_idx;
+
+	/* serialise updates to the handler list */
+	rte_spinlock_lock(&mempool_handler_list.sl);
+
+	/* Check whether jump table has space */
+	if (mempool_handler_list.num_handlers >= RTE_MEMPOOL_MAX_HANDLER_IDX) {
+		rte_spinlock_unlock(&mempool_handler_list.sl);
+		RTE_LOG(ERR, MEMPOOL,
+				"Maximum number of mempool handlers exceeded\n");
+		return -1;
+	}
+
+	if ((h->put == NULL) || (h->get == NULL) ||
+			(h->get_count == NULL)) {
+		rte_spinlock_unlock(&mempool_handler_list.sl);
+		RTE_LOG(ERR, MEMPOOL,
+			"Missing callback while registering mempool handler\n");
+		return -1;
+	}
+
+	/* add new handler index */
+	handler_idx = mempool_handler_list.num_handlers++;
+
+	snprintf(mempool_handler_list.handler[handler_idx].name,
+				RTE_MEMPOOL_NAMESIZE, "%s", h->name);
+	mempool_handler_list.handler[handler_idx].alloc = h->alloc;
+	mempool_handler_list.handler[handler_idx].put = h->put;
+	mempool_handler_list.handler[handler_idx].get = h->get;
+	mempool_handler_list.handler[handler_idx].get_count = h->get_count;
+
+	rte_spinlock_unlock(&mempool_handler_list.sl);
+
+	return handler_idx;
+}
+
+int16_t
+rte_get_mempool_handler_idx(const char *name)
+{
+	int16_t i;
+
+	for (i = 0; i < mempool_handler_list.num_handlers; i++) {
+		if (!strcmp(name, mempool_handler_list.handler[i].name))
+			return i;
+	}
+	return -1;
+}
+
+static int
+common_ring_mp_put(void *p, void * const *obj_table, unsigned n)
+{
+	return rte_ring_mp_enqueue_bulk((struct rte_ring *)p, obj_table, n);
+}
+
+static int
+common_ring_sp_put(void *p, void * const *obj_table, unsigned n)
+{
+	return rte_ring_sp_enqueue_bulk((struct rte_ring *)p, obj_table, n);
+}
+
+static int
+common_ring_mc_get(void *p, void **obj_table, unsigned n)
+{
+	return rte_ring_mc_dequeue_bulk((struct rte_ring *)p, obj_table, n);
+}
+
+static int
+common_ring_sc_get(void *p, void **obj_table, unsigned n)
+{
+	return rte_ring_sc_dequeue_bulk((struct rte_ring *)p, obj_table, n);
+}
+
+static unsigned
+common_ring_get_count(void *p)
+{
+	return rte_ring_count((struct rte_ring *)p);
+}
+
+static void *
+rte_mempool_common_ring_alloc(struct rte_mempool *mp,
+		const char *name, unsigned n, int socket_id, unsigned flags)
+{
+	struct rte_ring *r;
+	char rg_name[RTE_RING_NAMESIZE];
+	int rg_flags = 0;
+
+	if (flags & MEMPOOL_F_SP_PUT)
+		rg_flags |= RING_F_SP_ENQ;
+	if (flags & MEMPOOL_F_SC_GET)
+		rg_flags |= RING_F_SC_DEQ;
+
+	/* allocate the ring that will be used to store objects */
+	/* Ring functions will return appropriate errors if we are
+	 * running as a secondary process etc., so no checks made
+	 * in this function for that condition */
+	snprintf(rg_name, sizeof(rg_name), "%s-ring", name);
+	r = rte_ring_create(rg_name, rte_align32pow2(n+1), socket_id, rg_flags);
+	if (r == NULL)
+		return NULL;
+
+	/* the caller stores the returned pointer in mp->rt_pool */
+	return r;
+}
+
+void *
+rte_mempool_ext_alloc(struct rte_mempool *mp,
+		const char *name, unsigned n, int socket_id, unsigned flags)
+{
+	if (mempool_handler_list.handler[mp->handler_idx].alloc) {
+		return (mempool_handler_list.handler[mp->handler_idx].alloc)
+						(mp, name, n, socket_id, flags);
+	}
+	return NULL;
+}
+
+int
+rte_mempool_ext_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
+{
+	return (mempool_handler_list.handler[mp->handler_idx].get)
+						(mp->rt_pool, obj_table, n);
+}
+
+int
+rte_mempool_ext_put_bulk(struct rte_mempool *mp, void * const *obj_table,
+		unsigned n)
+{
+	return (mempool_handler_list.handler[mp->handler_idx].put)
+						(mp->rt_pool, obj_table, n);
+}
+
+unsigned
+rte_mempool_ext_get_count(const struct rte_mempool *mp)
+{
+	return (mempool_handler_list.handler[mp->handler_idx].get_count)
+						(mp->rt_pool);
+}
+
+static struct rte_mempool_handler handler_mp_mc = {
+	.name = "ring_mp_mc",
+	.alloc = rte_mempool_common_ring_alloc,
+	.put = common_ring_mp_put,
+	.get = common_ring_mc_get,
+	.get_count = common_ring_get_count,
+	.free = NULL
+};
+static struct rte_mempool_handler handler_sp_sc = {
+	.name = "ring_sp_sc",
+	.alloc = rte_mempool_common_ring_alloc,
+	.put = common_ring_sp_put,
+	.get = common_ring_sc_get,
+	.get_count = common_ring_get_count,
+	.free = NULL
+};
+static struct rte_mempool_handler handler_mp_sc = {
+	.name = "ring_mp_sc",
+	.alloc = rte_mempool_common_ring_alloc,
+	.put = common_ring_mp_put,
+	.get = common_ring_sc_get,
+	.get_count = common_ring_get_count,
+	.free = NULL
+};
+static struct rte_mempool_handler handler_sp_mc = {
+	.name = "ring_sp_mc",
+	.alloc = rte_mempool_common_ring_alloc,
+	.put = common_ring_sp_put,
+	.get = common_ring_mc_get,
+	.get_count = common_ring_get_count,
+	.free = NULL
+};
+
+REGISTER_MEMPOOL_HANDLER(handler_mp_mc);
+REGISTER_MEMPOOL_HANDLER(handler_sp_sc);
+REGISTER_MEMPOOL_HANDLER(handler_mp_sc);
+REGISTER_MEMPOOL_HANDLER(handler_sp_mc);
diff --git a/lib/librte_mempool/rte_mempool_internal.h b/lib/librte_mempool/rte_mempool_internal.h
new file mode 100644
index 0000000..982396f
--- /dev/null
+++ b/lib/librte_mempool/rte_mempool_internal.h
@@ -0,0 +1,75 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_MEMPOOL_INTERNAL_H_
+#define _RTE_MEMPOOL_INTERNAL_H_
+
+#include <rte_spinlock.h>
+#include <rte_mempool.h>
+
+#define RTE_MEMPOOL_MAX_HANDLER_IDX 16
+
+struct rte_mempool_handler {
+	char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool handler */
+
+	rte_mempool_alloc_t alloc;
+
+	rte_mempool_get_count get_count;
+
+	rte_mempool_free_t free;
+
+	rte_mempool_put_t put;
+
+	rte_mempool_get_t get;
+} __rte_cache_aligned;
+
+struct rte_mempool_handler_list {
+	rte_spinlock_t sl;		  /**< Spinlock for add/delete. */
+
+	int32_t num_handlers;	  /**< Number of handlers that are valid. */
+
+	/* storage for all possible handlers */
+	struct rte_mempool_handler handler[RTE_MEMPOOL_MAX_HANDLER_IDX];
+};
+
+int16_t rte_mempool_register_handler(struct rte_mempool_handler *h);
+int16_t rte_get_mempool_handler_idx(const char *name);
+
+#define REGISTER_MEMPOOL_HANDLER(h) \
+static int16_t __attribute__((used)) testfn_##h(void);\
+int16_t __attribute__((constructor, used)) testfn_##h(void)\
+{\
+	return rte_mempool_register_handler(&h);\
+}
+
+#endif
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index 17151e0..589db27 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -6,6 +6,7 @@ DPDK_2.0 {
 	rte_mempool_calc_obj_size;
 	rte_mempool_count;
 	rte_mempool_create;
+	rte_mempool_create_ext;
 	rte_mempool_dump;
 	rte_mempool_list_dump;
 	rte_mempool_lookup;
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 238+ messages in thread

* [PATCH 2/6] mempool: add stack (lifo) based external mempool handler
  2016-02-16 14:48 ` [PATCH 0/6] " David Hunt
  2016-02-16 14:48   ` [PATCH 1/6] mempool: add external mempool manager support David Hunt
@ 2016-02-16 14:48   ` David Hunt
  2016-02-19 13:31     ` Olivier MATZ
  2016-02-16 14:48   ` [PATCH 3/6] mempool: adds a simple ring-based mempool handler using mallocs for objects David Hunt
                     ` (5 subsequent siblings)
  7 siblings, 1 reply; 238+ messages in thread
From: David Hunt @ 2016-02-16 14:48 UTC (permalink / raw)
  To: dev

Adds a simple stack-based mempool handler.

Signed-off-by: David Hunt <david.hunt@intel.com>
---
 lib/librte_mempool/Makefile            |   2 +-
 lib/librte_mempool/rte_mempool.c       |   4 +-
 lib/librte_mempool/rte_mempool.h       |   1 +
 lib/librte_mempool/rte_mempool_stack.c | 164 +++++++++++++++++++++++++++++++++
 4 files changed, 169 insertions(+), 2 deletions(-)
 create mode 100644 lib/librte_mempool/rte_mempool_stack.c

diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
index aeaffd1..d795b48 100644
--- a/lib/librte_mempool/Makefile
+++ b/lib/librte_mempool/Makefile
@@ -43,7 +43,7 @@ LIBABIVER := 1
 # all source are stored in SRCS-y
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool.c
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_default.c
-
+SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_stack.c
 ifeq ($(CONFIG_RTE_LIBRTE_XEN_DOM0),y)
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_dom0_mempool.c
 endif
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index a577a3e..44bc92f 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -672,7 +672,9 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
 	 * examine the
 	 * flags to set the correct index into the handler table.
 	 */
-	if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
+	if (flags & MEMPOOL_F_USE_STACK)
+		sprintf(handler_name, "%s", "stack");
+	else if ((flags & MEMPOOL_F_SP_PUT) && (flags & MEMPOOL_F_SC_GET))
 		sprintf(handler_name, "%s", "ring_sp_sc");
 	else if (flags & MEMPOOL_F_SP_PUT)
 		sprintf(handler_name, "%s", "ring_sp_mc");
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 3705fbd..8d8201f 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -303,6 +303,7 @@ struct rte_mempool {
 #define MEMPOOL_F_NO_CACHE_ALIGN 0x0002 /**< Do not align objs on cache lines.*/
 #define MEMPOOL_F_SP_PUT         0x0004 /**< Default put is "single-producer".*/
 #define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
+#define MEMPOOL_F_USE_STACK      0x0010 /**< Use a stack for the common pool. */
 #define MEMPOOL_F_INT_HANDLER    0x0020 /**< Using internal mempool handler */
 
 
diff --git a/lib/librte_mempool/rte_mempool_stack.c b/lib/librte_mempool/rte_mempool_stack.c
new file mode 100644
index 0000000..d341793
--- /dev/null
+++ b/lib/librte_mempool/rte_mempool_stack.c
@@ -0,0 +1,164 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <rte_mempool.h>
+#include <rte_malloc.h>
+#include <string.h>
+
+#include "rte_mempool_internal.h"
+
+struct rte_mempool_common_stack {
+	/* Spinlock to protect access */
+	rte_spinlock_t sl;
+
+	uint32_t size;
+	uint32_t len;
+	void *objs[];
+};
+
+static void *
+common_stack_alloc(struct rte_mempool *mp,
+		const char *name, unsigned n, int socket_id, unsigned flags)
+{
+	struct rte_mempool_common_stack *s;
+	char stack_name[RTE_RING_NAMESIZE];
+
+	int size = sizeof(*s) + (n+16)*sizeof(void *);
+
+	(void)flags; /* this handler does not use the flags argument */
+
+	/* Allocate our local memory structure */
+	snprintf(stack_name, sizeof(stack_name), "%s-common-stack", name);
+	s = rte_zmalloc_socket(stack_name,
+					size, RTE_CACHE_LINE_SIZE, socket_id);
+	if (s == NULL) {
+		RTE_LOG(ERR, MEMPOOL, "Cannot allocate stack!\n");
+		return NULL;
+	}
+
+	/* And the spinlock we use to protect access */
+	rte_spinlock_init(&s->sl);
+
+	s->size = n;
+	mp->rt_pool = (void *) s;
+	mp->handler_idx = rte_get_mempool_handler_idx("stack");
+
+	return s;
+}
+
+static int common_stack_put(void *p, void * const *obj_table,
+		unsigned n)
+{
+	struct rte_mempool_common_stack *s =
+		(struct rte_mempool_common_stack *)p;
+	void **cache_objs;
+	unsigned index;
+
+	/* Acquire lock */
+	rte_spinlock_lock(&s->sl);
+	cache_objs = &s->objs[s->len];
+
+	/* Is there sufficient space in the stack? */
+	if ((s->len + n) > s->size) {
+		rte_spinlock_unlock(&s->sl);
+		return -ENOENT;
+	}
+
+	/* Add elements back into the cache */
+	for (index = 0; index < n; ++index, obj_table++)
+		cache_objs[index] = *obj_table;
+
+	s->len += n;
+
+	rte_spinlock_unlock(&s->sl);
+	return 0;
+}
+
+static int common_stack_get(void *p, void **obj_table,
+		unsigned n)
+{
+	struct rte_mempool_common_stack *s =
+		(struct rte_mempool_common_stack *)p;
+	void **cache_objs;
+	unsigned index, len;
+
+	/* Acquire lock */
+	rte_spinlock_lock(&s->sl);
+
+	if (unlikely(n > s->len)) {
+		rte_spinlock_unlock(&s->sl);
+		return -ENOENT;
+	}
+
+	cache_objs = s->objs;
+
+	for (index = 0, len = s->len - 1; index < n;
+					++index, len--, obj_table++)
+		*obj_table = cache_objs[len];
+
+	s->len -= n;
+	rte_spinlock_unlock(&s->sl);
+	return n;
+}
+
+static unsigned common_stack_get_count(void *p)
+{
+	struct rte_mempool_common_stack *s =
+		(struct rte_mempool_common_stack *)p;
+
+	return s->len;
+}
+
+static int
+common_stack_free(struct rte_mempool *mp)
+{
+	struct rte_mempool_common_stack *s;
+
+	s = mp->rt_pool;
+
+	rte_free(s);
+
+	return 0;
+}
+
+static struct rte_mempool_handler handler_stack = {
+	.name = "stack",
+	.alloc = common_stack_alloc,
+	.put = common_stack_put,
+	.get = common_stack_get,
+	.get_count = common_stack_get_count,
+	.free = common_stack_free
+};
+
+REGISTER_MEMPOOL_HANDLER(handler_stack);
-- 
2.5.0
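The stack handler above gives all-or-nothing bulk semantics on a bounded LIFO: a put that would overflow is refused outright, and a get pops the most recently freed objects first. A minimal single-threaded sketch of those semantics (hypothetical names, spinlock omitted):

```c
#include <assert.h>

#define STACK_CAP 8

struct stack {
	unsigned len;
	void *objs[STACK_CAP];
};

/* Bulk put: refuse the whole burst if it would overflow, mirroring
 * common_stack_put()'s all-or-nothing error return. */
int stack_put(struct stack *s, void * const *objs, unsigned n)
{
	unsigned i;

	if (s->len + n > STACK_CAP)
		return -1;
	for (i = 0; i < n; i++)
		s->objs[s->len + i] = objs[i];
	s->len += n;
	return 0;
}

/* Bulk get: pop the n most recently put objects, newest first,
 * or fail without taking anything. */
int stack_get(struct stack *s, void **objs, unsigned n)
{
	unsigned i;

	if (n > s->len)
		return -1;
	for (i = 0; i < n; i++)
		objs[i] = s->objs[--s->len];
	return (int)n;
}
```

Handing back the most recently freed objects first is the point of a LIFO pool: those buffers are the most likely to still be warm in the cache.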


* [PATCH 3/6] mempool: adds a simple ring-based mempool handler using mallocs for objects
  2016-02-16 14:48 ` [PATCH 0/6] " David Hunt
  2016-02-16 14:48   ` [PATCH 1/6] mempool: add external mempool manager support David Hunt
  2016-02-16 14:48   ` [PATCH 2/6] mempool: add stack (lifo) based external mempool handler David Hunt
@ 2016-02-16 14:48   ` David Hunt
  2016-02-16 14:48   ` [PATCH 4/6] mempool: add autotest for external mempool custom example David Hunt
                     ` (4 subsequent siblings)
  7 siblings, 0 replies; 238+ messages in thread
From: David Hunt @ 2016-02-16 14:48 UTC (permalink / raw)
  To: dev

Signed-off-by: David Hunt <david.hunt@intel.com>
---
 lib/librte_mempool/Makefile         |   1 +
 lib/librte_mempool/custom_mempool.c | 146 ++++++++++++++++++++++++++++++++++++
 2 files changed, 147 insertions(+)
 create mode 100644 lib/librte_mempool/custom_mempool.c

diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
index d795b48..4f72546 100644
--- a/lib/librte_mempool/Makefile
+++ b/lib/librte_mempool/Makefile
@@ -44,6 +44,7 @@ LIBABIVER := 1
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool.c
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_default.c
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_stack.c
+SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  custom_mempool.c
 ifeq ($(CONFIG_RTE_LIBRTE_XEN_DOM0),y)
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_dom0_mempool.c
 endif
diff --git a/lib/librte_mempool/custom_mempool.c b/lib/librte_mempool/custom_mempool.c
new file mode 100644
index 0000000..5c85203
--- /dev/null
+++ b/lib/librte_mempool/custom_mempool.c
@@ -0,0 +1,146 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <string.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <stdarg.h>
+#include <errno.h>
+#include <sys/queue.h>
+
+#include <rte_mempool.h>
+
+#include "rte_mempool_internal.h"
+
+/*
+ * Mempool
+ * =======
+ *
+ * Basic tests: done on one core with and without cache:
+ *
+ *    - Get one object, put one object
+ *    - Get two objects, put two objects
+ *    - Get all objects, test that their content is not modified and
+ *      put them back in the pool.
+ */
+
+#define TIME_S 5
+#define MEMPOOL_ELT_SIZE 2048
+#define MAX_KEEP 128
+#define MEMPOOL_SIZE 8192
+
+/*
+ * Simple example of custom mempool structure. Holds pointers to all the
+ * elements which are simply malloc'd in this example.
+ */
+struct custom_mempool {
+	struct rte_ring *r;             /* Ring to manage elements */
+	void *elements[MEMPOOL_SIZE];   /* Element pointers */
+};
+
+/*
+ * Loop through all the element pointers and allocate a chunk of memory, then
+ * insert that memory into the ring.
+ */
+static void *
+custom_mempool_alloc(struct rte_mempool *mp,
+		const char *name, unsigned n,
+		__attribute__((unused)) int socket_id,
+		__attribute__((unused)) unsigned flags)
+{
+	struct custom_mempool *cm;
+	uint32_t *objnum;
+	unsigned int i;
+
+	cm = malloc(sizeof(struct custom_mempool));
+	if (cm == NULL)
+		return NULL;
+
+	/* Create the ring so we can enqueue/dequeue */
+	cm->r = rte_ring_create(name, rte_align32pow2(n+1), 0, 0);
+	if (cm->r == NULL) {
+		free(cm);
+		return NULL;
+	}
+
+	/*
+	 * Loop around the elements and allocate the required memory
+	 * and place them in the ring.
+	 * Not worried about alignment or performance for this example.
+	 * Also, set the first 32-bits to be the element number so we
+	 * can check later on.
+	 */
+	for (i = 0; i < n; i++) {
+		cm->elements[i] = malloc(mp->elt_size);
+		memset(cm->elements[i], 0, mp->elt_size);
+		objnum = (uint32_t *)cm->elements[i];
+		*objnum = i;
+		rte_ring_sp_enqueue_bulk(cm->r, &(cm->elements[i]), 1);
+	}
+
+	return cm;
+}
+
+static int
+custom_mempool_put(void *p, void * const *obj_table, unsigned n)
+{
+	struct custom_mempool *cm = (struct custom_mempool *)p;
+
+	return rte_ring_mp_enqueue_bulk(cm->r, obj_table, n);
+}
+
+static int
+custom_mempool_get(void *p, void **obj_table, unsigned n)
+{
+	struct custom_mempool *cm = (struct custom_mempool *)p;
+
+	return rte_ring_mc_dequeue_bulk(cm->r, obj_table, n);
+}
+
+static unsigned
+custom_mempool_get_count(void *p)
+{
+	struct custom_mempool *cm = (struct custom_mempool *)p;
+
+	return rte_ring_count(cm->r);
+}
+
+static struct rte_mempool_handler mempool_handler_custom = {
+	.name = "custom_handler",
+	.alloc = custom_mempool_alloc,
+	.put = custom_mempool_put,
+	.get = custom_mempool_get,
+	.get_count = custom_mempool_get_count,
+};
+
+REGISTER_MEMPOOL_HANDLER(mempool_handler_custom);
-- 
2.5.0
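custom_mempool_alloc pre-allocates every element up front, stamps its index into the first 32 bits, and enqueues it; get and put then reduce to ring dequeue and enqueue. That fill-then-dequeue pattern can be sketched without rte_ring (illustrative names, plain malloc, cleanup of partial failures abbreviated):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define POOL_N 4
#define ELT_SIZE 64

struct pool {
	unsigned head, tail;		/* FIFO ring indices */
	void *ring[POOL_N + 1];
	void *elements[POOL_N];		/* kept so a free() hook could release them */
};

/* Mirror of custom_mempool_alloc: allocate each element, stamp its
 * index into the first 32 bits, and enqueue it for later gets. */
struct pool *pool_alloc(void)
{
	struct pool *p = calloc(1, sizeof(*p));
	uint32_t i;

	if (p == NULL)
		return NULL;
	for (i = 0; i < POOL_N; i++) {
		uint32_t *obj = calloc(1, ELT_SIZE);

		if (obj == NULL)
			return NULL;	/* cleanup of partial state omitted */
		*obj = i;		/* element number, checkable later */
		p->elements[i] = obj;
		p->ring[p->tail++] = obj;
	}
	return p;
}

/* get == dequeue: hand out the next pre-built element. */
void *pool_get(struct pool *p)
{
	if (p->head == p->tail)
		return NULL;
	return p->ring[p->head++];
}
```

Because the objects were enqueued in index order, the first gets come back stamped 0, 1, 2, … — the property the autotest in the next patch relies on.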


* [PATCH 4/6] mempool: add autotest for external mempool custom example
  2016-02-16 14:48 ` [PATCH 0/6] " David Hunt
                     ` (2 preceding siblings ...)
  2016-02-16 14:48   ` [PATCH 3/6] mempool: adds a simple ring-based mempool handler using mallocs for objects David Hunt
@ 2016-02-16 14:48   ` David Hunt
  2016-02-16 14:48   ` [PATCH 5/6] mempool: allow rte_pktmbuf_pool_create switch between mempool handlers David Hunt
                     ` (3 subsequent siblings)
  7 siblings, 0 replies; 238+ messages in thread
From: David Hunt @ 2016-02-16 14:48 UTC (permalink / raw)
  To: dev

Signed-off-by: David Hunt <david.hunt@intel.com>
---
 app/test/Makefile           |   1 +
 app/test/test_ext_mempool.c | 451 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 452 insertions(+)
 create mode 100644 app/test/test_ext_mempool.c

diff --git a/app/test/Makefile b/app/test/Makefile
index ec33e1a..9a2f75f 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -74,6 +74,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer_perf.c
 SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer_racecond.c
 
 SRCS-y += test_mempool.c
+SRCS-y += test_ext_mempool.c
 SRCS-y += test_mempool_perf.c
 
 SRCS-y += test_mbuf.c
diff --git a/app/test/test_ext_mempool.c b/app/test/test_ext_mempool.c
new file mode 100644
index 0000000..6beada0
--- /dev/null
+++ b/app/test/test_ext_mempool.c
@@ -0,0 +1,451 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <string.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <stdarg.h>
+#include <errno.h>
+#include <sys/queue.h>
+
+#include <rte_common.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_launch.h>
+#include <rte_cycles.h>
+#include <rte_eal.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+#include <rte_spinlock.h>
+#include <rte_malloc.h>
+
+#include "test.h"
+
+/*
+ * Mempool
+ * =======
+ *
+ * Basic tests: done on one core with and without cache:
+ *
+ *    - Get one object, put one object
+ *    - Get two objects, put two objects
+ *    - Get all objects, test that their content is not modified and
+ *      put them back in the pool.
+ */
+
+#define TIME_S 5
+#define MEMPOOL_ELT_SIZE 2048
+#define MAX_KEEP 128
+#define MEMPOOL_SIZE 8192
+
+static struct rte_mempool *mp;
+static struct rte_mempool *ext_nocache, *ext_cache;
+
+static rte_atomic32_t synchro;
+
+/*
+ * For our tests, we use the following struct to pass info to our create
+ *  callback so it can call rte_mempool_create
+ */
+struct custom_mempool_alloc_params {
+	char ring_name[RTE_RING_NAMESIZE];
+	unsigned n_elt;
+	unsigned elt_size;
+};
+
+/*
+ * Simple example of custom mempool structure. Holds pointers to all the
+ * elements which are simply malloc'd in this example.
+ */
+struct custom_mempool {
+	struct rte_ring *r;            /* Ring to manage elements */
+	void *elements[MEMPOOL_SIZE];  /* Element pointers */
+};
+
+/*
+ * save the object number in the first 4 bytes of object data. All
+ * other bytes are set to 0.
+ */
+static void
+my_obj_init(struct rte_mempool *mp, __attribute__((unused)) void *arg,
+		void *obj, unsigned i)
+{
+	uint32_t *objnum = obj;
+
+	memset(obj, 0, mp->elt_size);
+	*objnum = i;
+	printf("Setting objnum to %d\n", i);
+}
+
+/* basic tests (done on one core) */
+static int
+test_mempool_basic(void)
+{
+	uint32_t *objnum;
+	void **objtable;
+	void *obj, *obj2;
+	char *obj_data;
+	int ret = 0;
+	unsigned i, j;
+
+	/* dump the mempool status */
+	rte_mempool_dump(stdout, mp);
+
+	printf("Count = %d\n", rte_mempool_count(mp));
+	printf("get an object\n");
+	if (rte_mempool_get(mp, &obj) < 0) {
+		printf("get Failed\n");
+		return -1;
+	}
+	printf("Count = %d\n", rte_mempool_count(mp));
+	rte_mempool_dump(stdout, mp);
+
+	/* tests that improve coverage */
+	printf("get object count\n");
+	if (rte_mempool_count(mp) != MEMPOOL_SIZE - 1)
+		return -1;
+
+	printf("get private data\n");
+	if (rte_mempool_get_priv(mp) !=
+			(char *) mp + MEMPOOL_HEADER_SIZE(mp, mp->pg_num))
+		return -1;
+
+	printf("get physical address of an object\n");
+	if (MEMPOOL_IS_CONTIG(mp) &&
+			rte_mempool_virt2phy(mp, obj) !=
+			(phys_addr_t) (mp->phys_addr +
+			(phys_addr_t) ((char *) obj - (char *) mp)))
+		return -1;
+
+	printf("put the object back\n");
+	rte_mempool_put(mp, obj);
+	rte_mempool_dump(stdout, mp);
+
+	printf("get 2 objects\n");
+	if (rte_mempool_get(mp, &obj) < 0)
+		return -1;
+	if (rte_mempool_get(mp, &obj2) < 0) {
+		rte_mempool_put(mp, obj);
+		return -1;
+	}
+	rte_mempool_dump(stdout, mp);
+
+	printf("put the objects back\n");
+	rte_mempool_put(mp, obj);
+	rte_mempool_put(mp, obj2);
+	rte_mempool_dump(stdout, mp);
+
+	/*
+	 * get many objects: we cannot get them all because the cache
+	 * on other cores may not be empty.
+	 */
+	objtable = malloc(MEMPOOL_SIZE * sizeof(void *));
+	if (objtable == NULL)
+		return -1;
+
+	for (i = 0; i < MEMPOOL_SIZE; i++) {
+		if (rte_mempool_get(mp, &objtable[i]) < 0)
+			break;
+	}
+
+	/*
+	 * for each object, check that its content was not modified,
+	 * and put objects back in pool
+	 */
+	while (i--) {
+		obj = objtable[i];
+		obj_data = obj;
+		objnum = obj;
+		if (*objnum > MEMPOOL_SIZE) {
+			printf("bad object number(%d)\n", *objnum);
+			ret = -1;
+			break;
+		}
+		for (j = sizeof(*objnum); j < mp->elt_size; j++) {
+			if (obj_data[j] != 0)
+				ret = -1;
+		}
+
+		rte_mempool_put(mp, objtable[i]);
+	}
+
+	free(objtable);
+	if (ret == -1)
+		printf("objects were modified!\n");
+
+	return ret;
+}
+
+static int test_mempool_creation_with_exceeded_cache_size(void)
+{
+	struct rte_mempool *mp_cov;
+
+	mp_cov = rte_mempool_create("test_mempool_creation_exceeded_cache_size",
+						MEMPOOL_SIZE,
+						MEMPOOL_ELT_SIZE,
+						RTE_MEMPOOL_CACHE_MAX_SIZE + 32,
+						0,
+						NULL, NULL,
+						my_obj_init, NULL,
+						SOCKET_ID_ANY, 0);
+	if (NULL != mp_cov)
+		return -1;
+
+	return 0;
+}
+
+/*
+ * it tests some more basic of mempool
+ */
+static int
+test_mempool_basic_ex(struct rte_mempool *mp)
+{
+	unsigned i;
+	void **obj;
+	void *err_obj;
+	int ret = -1;
+
+	if (mp == NULL)
+		return ret;
+
+	obj = rte_calloc("test_mempool_basic_ex",
+					MEMPOOL_SIZE , sizeof(void *), 0);
+	if (obj == NULL) {
+		printf("test_mempool_basic_ex failed to rte_calloc\n");
+		return ret;
+	}
+	printf("test_mempool_basic_ex now mempool (%s) has %u free entries\n",
+					mp->name, rte_mempool_free_count(mp));
+	if (rte_mempool_full(mp) != 1) {
+		printf("test_mempool_basic_ex the mempool should be full\n");
+		goto fail_mp_basic_ex;
+	}
+
+	for (i = 0; i < MEMPOOL_SIZE; i++) {
+		if (rte_mempool_mc_get(mp, &obj[i]) < 0) {
+			printf("test_mp_basic_ex fail to get object for [%u]\n",
+					i);
+			goto fail_mp_basic_ex;
+		}
+	}
+	if (rte_mempool_mc_get(mp, &err_obj) == 0) {
+		printf("test_mempool_basic_ex get an impossible obj\n");
+		goto fail_mp_basic_ex;
+	}
+	printf("number: %u\n", i);
+	if (rte_mempool_empty(mp) != 1) {
+		printf("test_mempool_basic_ex the mempool should be empty\n");
+		goto fail_mp_basic_ex;
+	}
+
+	for (i = 0; i < MEMPOOL_SIZE; i++)
+		rte_mempool_mp_put(mp, obj[i]);
+
+	if (rte_mempool_full(mp) != 1) {
+		printf("test_mempool_basic_ex the mempool should be full\n");
+		goto fail_mp_basic_ex;
+	}
+
+	ret = 0;
+
+fail_mp_basic_ex:
+	if (obj != NULL)
+		rte_free((void *)obj);
+
+	return ret;
+}
+
+static int
+test_mempool_same_name_twice_creation(void)
+{
+	struct rte_mempool *mp_tc;
+
+	mp_tc = rte_mempool_create("test_mempool_same_name_twice_creation",
+						MEMPOOL_SIZE,
+						MEMPOOL_ELT_SIZE, 0, 0,
+						NULL, NULL,
+						NULL, NULL,
+						SOCKET_ID_ANY, 0);
+	if (NULL == mp_tc)
+		return -1;
+
+	mp_tc = rte_mempool_create("test_mempool_same_name_twice_creation",
+						MEMPOOL_SIZE,
+						MEMPOOL_ELT_SIZE, 0, 0,
+						NULL, NULL,
+						NULL, NULL,
+						SOCKET_ID_ANY, 0);
+	if (NULL != mp_tc)
+		return -1;
+
+	return 0;
+}
+
+/*
+ * Basic test for mempool_xmem functions.
+ */
+static int
+test_mempool_xmem_misc(void)
+{
+	uint32_t elt_num, total_size;
+	size_t sz;
+	ssize_t usz;
+
+	elt_num = MAX_KEEP;
+	total_size = rte_mempool_calc_obj_size(MEMPOOL_ELT_SIZE, 0, NULL);
+	sz = rte_mempool_xmem_size(elt_num, total_size, MEMPOOL_PG_SHIFT_MAX);
+
+	usz = rte_mempool_xmem_usage(NULL, elt_num, total_size, 0, 1,
+		MEMPOOL_PG_SHIFT_MAX);
+
+	if (sz != (size_t)usz)  {
+		printf("failure @ %s: rte_mempool_xmem_usage(%u, %u) "
+			"returns: %#zx, while expected: %#zx;\n",
+			__func__, elt_num, total_size, sz, (size_t)usz);
+		return (-1);
+	}
+
+	return 0;
+}
+
+
+
+static int
+test_ext_mempool(void)
+{
+	rte_atomic32_init(&synchro);
+
+	/* create an external mempool (without cache) */
+	if (ext_nocache == NULL)
+		ext_nocache = rte_mempool_create_ext(
+				"ext_nocache",         /* Name */
+				MEMPOOL_SIZE,          /* Number of Elements */
+				MEMPOOL_ELT_SIZE,      /* Element size */
+				0,                     /* Cache Size */
+				0,                     /* Private Data size */
+				NULL, NULL, NULL, NULL,
+				0,                     /* socket_id */
+				0,                     /* flags */
+				"custom_handler"
+				);
+	if (ext_nocache == NULL)
+		return -1;
+
+	/* create an external mempool (with cache) */
+	if (ext_cache == NULL)
+		ext_cache = rte_mempool_create_ext(
+				"ext_cache",           /* Name */
+				MEMPOOL_SIZE,          /* Number of Elements */
+				MEMPOOL_ELT_SIZE,      /* Element size */
+				16,                    /* Cache Size */
+				0,                     /* Private Data size */
+				NULL, NULL, NULL, NULL,
+				0,                     /* socket_id */
+				0,                     /* flags */
+				"custom_handler"
+				);
+	if (ext_cache == NULL)
+		return -1;
+
+	if (rte_mempool_get_handler_name(ext_nocache)) {
+		printf("Handler name is \"%s\"\n",
+			rte_mempool_get_handler_name(ext_nocache));
+	} else {
+		printf("Cannot lookup mempool handler name\n");
+		return -1;
+	}
+
+	if (rte_mempool_get_handler_name(ext_cache))
+		printf("Handler name is \"%s\"\n",
+			rte_mempool_get_handler_name(ext_cache));
+	else {
+		printf("Cannot lookup mempool handler name\n");
+		return -1;
+	}
+
+	/* retrieve the mempool from its name */
+	if (rte_mempool_lookup("ext_nocache") != ext_nocache) {
+		printf("Cannot lookup mempool from its name\n");
+		return -1;
+	}
+	/* retrieve the mempool from its name */
+	if (rte_mempool_lookup("ext_cache") != ext_cache) {
+		printf("Cannot lookup mempool from its name\n");
+		return -1;
+	}
+
+	rte_mempool_list_dump(stdout);
+
+	printf("Running basic tests\n");
+	/* basic tests without cache */
+	mp = ext_nocache;
+	if (test_mempool_basic() < 0)
+		return -1;
+
+	/* basic tests with cache */
+	mp = ext_cache;
+	if (test_mempool_basic() < 0)
+		return -1;
+
+	/* more basic tests without cache */
+	if (test_mempool_basic_ex(ext_nocache) < 0)
+		return -1;
+
+	if (test_mempool_creation_with_exceeded_cache_size() < 0)
+		return -1;
+
+	if (test_mempool_same_name_twice_creation() < 0)
+		return -1;
+
+	if (test_mempool_xmem_misc() < 0)
+		return -1;
+
+	rte_mempool_list_dump(stdout);
+
+	return 0;
+}
+
+static struct test_command mempool_cmd = {
+	.command = "ext_mempool_autotest",
+	.callback = test_ext_mempool,
+};
+REGISTER_TEST_COMMAND(mempool_cmd);
-- 
2.5.0


* [PATCH 5/6] mempool: allow rte_pktmbuf_pool_create switch between mempool handlers
  2016-02-16 14:48 ` [PATCH 0/6] " David Hunt
                     ` (3 preceding siblings ...)
  2016-02-16 14:48   ` [PATCH 4/6] mempool: add autotest for external mempool custom example David Hunt
@ 2016-02-16 14:48   ` David Hunt
  2016-02-16 14:48   ` [PATCH 6/6] mempool: add in the RTE_NEXT_ABI protection for ABI breakages David Hunt
                     ` (2 subsequent siblings)
  7 siblings, 0 replies; 238+ messages in thread
From: David Hunt @ 2016-02-16 14:48 UTC (permalink / raw)
  To: dev

v2 changes: added the new options to both the linux and bsd config files.
If the user wants rte_pktmbuf_pool_create() to use an external mempool
handler, they set RTE_MEMPOOL_HANDLER_NAME to the name of the mempool
handler they wish to use, and change RTE_MEMPOOL_HANDLER_EXT to 'y'.

Signed-off-by: David Hunt <david.hunt@intel.com>
---
 config/common_bsdapp       | 2 ++
 config/common_linuxapp     | 2 ++
 lib/librte_mbuf/rte_mbuf.c | 8 ++++++++
 3 files changed, 12 insertions(+)

diff --git a/config/common_bsdapp b/config/common_bsdapp
index 696382c..e0c812a 100644
--- a/config/common_bsdapp
+++ b/config/common_bsdapp
@@ -347,6 +347,8 @@ CONFIG_RTE_RING_PAUSE_REP_COUNT=0
 CONFIG_RTE_LIBRTE_MEMPOOL=y
 CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE=512
 CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG=n
+CONFIG_RTE_MEMPOOL_HANDLER_EXT=n
+CONFIG_RTE_MEMPOOL_HANDLER_NAME="custom_handler"
 
 #
 # Compile librte_mbuf
diff --git a/config/common_linuxapp b/config/common_linuxapp
index f1638db..9aa62ca 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -363,6 +363,8 @@ CONFIG_RTE_RING_PAUSE_REP_COUNT=0
 CONFIG_RTE_LIBRTE_MEMPOOL=y
 CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE=512
 CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG=n
+CONFIG_RTE_MEMPOOL_HANDLER_EXT=n
+CONFIG_RTE_MEMPOOL_HANDLER_NAME="custom_handler"
 
 #
 # Compile librte_mbuf
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index c18b438..42b0cd1 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -167,10 +167,18 @@ rte_pktmbuf_pool_create(const char *name, unsigned n,
 	mbp_priv.mbuf_data_room_size = data_room_size;
 	mbp_priv.mbuf_priv_size = priv_size;
 
+#ifdef RTE_MEMPOOL_HANDLER_EXT
+	return rte_mempool_create_ext(name, n, elt_size,
+		cache_size, sizeof(struct rte_pktmbuf_pool_private),
+		rte_pktmbuf_pool_init, &mbp_priv, rte_pktmbuf_init, NULL,
+		socket_id, 0,
+		RTE_MEMPOOL_HANDLER_NAME);
+#else
 	return rte_mempool_create(name, n, elt_size,
 		cache_size, sizeof(struct rte_pktmbuf_pool_private),
 		rte_pktmbuf_pool_init, &mbp_priv, rte_pktmbuf_init, NULL,
 		socket_id, 0);
+#endif
 }
 
 /* do some sanity checks on a mbuf: panic if it fails */
-- 
2.5.0
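The #ifdef in rte_pktmbuf_pool_create selects the creation path at build time. A compile-time switch of the same shape can be shown self-contained; the macro names here stand in for the CONFIG_RTE_MEMPOOL_HANDLER_* options and are defined inline only to make the sketch buildable:

```c
#include <assert.h>
#include <string.h>

/* Stand-ins for the two creation paths; both just report which
 * handler would back the pool. */
const char *create_default(void) { return "ring_mp_mc"; }
const char *create_ext(const char *handler) { return handler; }

/* Defined inline only so the sketch is self-contained; in the patch
 * these come from the build-system config. */
#define MEMPOOL_HANDLER_EXT
#define MEMPOOL_HANDLER_NAME "custom_handler"

const char *pool_create(void)
{
#ifdef MEMPOOL_HANDLER_EXT
	/* Route creation through the named external handler. */
	return create_ext(MEMPOOL_HANDLER_NAME);
#else
	/* Legacy path: the built-in ring-backed pool. */
	return create_default();
#endif
}
```

The choice is baked in at compile time, so existing binaries and the legacy create path are untouched when the option stays at 'n'.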


* [PATCH 6/6] mempool: add in the RTE_NEXT_ABI protection for ABI breakages
  2016-02-16 14:48 ` [PATCH 0/6] " David Hunt
                     ` (4 preceding siblings ...)
  2016-02-16 14:48   ` [PATCH 5/6] mempool: allow rte_pktmbuf_pool_create switch between mempool handlers David Hunt
@ 2016-02-16 14:48   ` David Hunt
  2016-02-19 13:33     ` Olivier MATZ
  2016-02-19 13:25   ` [PATCH 0/6] external mempool manager Olivier MATZ
  2016-03-09  9:50   ` [PATCH v3 0/4] " David Hunt
  7 siblings, 1 reply; 238+ messages in thread
From: David Hunt @ 2016-02-16 14:48 UTC (permalink / raw)
  To: dev

v2: Kept all the NEXT_ABI defs in this patch so as to make the
previous patches easier to read, and also to make it clear what
code is necessary to keep ABI compatibility when NEXT_ABI is
disabled.

Signed-off-by: David Hunt <david.hunt@intel.com>
---
 app/test/Makefile                |   2 +
 app/test/test_mempool_perf.c     |   3 +
 lib/librte_mbuf/rte_mbuf.c       |   7 ++
 lib/librte_mempool/Makefile      |   2 +
 lib/librte_mempool/rte_mempool.c | 240 ++++++++++++++++++++++++++++++++++++++-
 lib/librte_mempool/rte_mempool.h |  68 ++++++++++-
 6 files changed, 320 insertions(+), 2 deletions(-)

diff --git a/app/test/Makefile b/app/test/Makefile
index 9a2f75f..8fcf0c2 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -74,7 +74,9 @@ SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer_perf.c
 SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer_racecond.c
 
 SRCS-y += test_mempool.c
+ifeq ($(CONFIG_RTE_NEXT_ABI),y)
 SRCS-y += test_ext_mempool.c
+endif
 SRCS-y += test_mempool_perf.c
 
 SRCS-y += test_mbuf.c
diff --git a/app/test/test_mempool_perf.c b/app/test/test_mempool_perf.c
index 091c1df..ca69e49 100644
--- a/app/test/test_mempool_perf.c
+++ b/app/test/test_mempool_perf.c
@@ -161,6 +161,9 @@ per_lcore_mempool_test(__attribute__((unused)) void *arg)
 							   n_get_bulk);
 				if (unlikely(ret < 0)) {
 					rte_mempool_dump(stdout, mp);
+#ifndef RTE_NEXT_ABI
+					rte_ring_dump(stdout, mp->ring);
+#endif
 					/* in this case, objects are lost... */
 					return -1;
 				}
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index 42b0cd1..967d987 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -167,6 +167,7 @@ rte_pktmbuf_pool_create(const char *name, unsigned n,
 	mbp_priv.mbuf_data_room_size = data_room_size;
 	mbp_priv.mbuf_priv_size = priv_size;
 
+#ifdef RTE_NEXT_ABI
 #ifdef RTE_MEMPOOL_HANDLER_EXT
 	return rte_mempool_create_ext(name, n, elt_size,
 		cache_size, sizeof(struct rte_pktmbuf_pool_private),
@@ -179,6 +180,12 @@ rte_pktmbuf_pool_create(const char *name, unsigned n,
 		rte_pktmbuf_pool_init, &mbp_priv, rte_pktmbuf_init, NULL,
 		socket_id, 0);
 #endif
+#else
+	return rte_mempool_create(name, n, elt_size,
+		cache_size, sizeof(struct rte_pktmbuf_pool_private),
+		rte_pktmbuf_pool_init, &mbp_priv, rte_pktmbuf_init, NULL,
+		socket_id, 0);
+#endif
 }
 
 /* do some sanity checks on a mbuf: panic if it fails */
diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
index 4f72546..8038785 100644
--- a/lib/librte_mempool/Makefile
+++ b/lib/librte_mempool/Makefile
@@ -42,9 +42,11 @@ LIBABIVER := 1
 
 # all source are stored in SRCS-y
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool.c
+ifeq ($(CONFIG_RTE_NEXT_ABI),y)
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_default.c
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_stack.c
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  custom_mempool.c
+endif
 ifeq ($(CONFIG_RTE_LIBRTE_XEN_DOM0),y)
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_dom0_mempool.c
 endif
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 44bc92f..53e44ff 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -59,7 +59,9 @@
 #include <rte_spinlock.h>
 
 #include "rte_mempool.h"
+#ifdef RTE_NEXT_ABI
 #include "rte_mempool_internal.h"
+#endif
 
 TAILQ_HEAD(rte_mempool_list, rte_tailq_entry);
 
@@ -400,6 +402,7 @@ rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
 					       MEMPOOL_PG_SHIFT_MAX);
 }
 
+#ifdef RTE_NEXT_ABI
 /*
  * Common mempool create function.
  * Create the mempool over already allocated chunk of memory.
@@ -698,6 +701,229 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
 
 	return mp;
 }
+#else
+/*
+ * Create the mempool over an already allocated chunk of memory.
+ * That external memory buffer can consist of physically disjoint pages.
+ * Setting vaddr to NULL makes the mempool fall back to the original
+ * behaviour and allocate space for the mempool and its elements as one
+ * big chunk of physically contiguous memory.
+ */
+struct rte_mempool *
+rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
+		unsigned cache_size, unsigned private_data_size,
+		rte_mempool_ctor_t *mp_init, void *mp_init_arg,
+		rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
+		int socket_id, unsigned flags, void *vaddr,
+		const phys_addr_t paddr[], uint32_t pg_num, uint32_t pg_shift)
+{
+	char mz_name[RTE_MEMZONE_NAMESIZE];
+	char rg_name[RTE_RING_NAMESIZE];
+	struct rte_mempool_list *mempool_list;
+	struct rte_mempool *mp = NULL;
+	struct rte_tailq_entry *te;
+	struct rte_ring *r;
+	const struct rte_memzone *mz;
+	size_t mempool_size;
+	int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
+	int rg_flags = 0;
+	void *obj;
+	struct rte_mempool_objsz objsz;
+	void *startaddr;
+	int page_size = getpagesize();
+
+	/* compilation-time checks */
+	RTE_BUILD_BUG_ON((sizeof(struct rte_mempool) &
+			  RTE_CACHE_LINE_MASK) != 0);
+#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
+	RTE_BUILD_BUG_ON((sizeof(struct rte_mempool_cache) &
+			  RTE_CACHE_LINE_MASK) != 0);
+	RTE_BUILD_BUG_ON((offsetof(struct rte_mempool, local_cache) &
+			  RTE_CACHE_LINE_MASK) != 0);
+#endif
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+	RTE_BUILD_BUG_ON((sizeof(struct rte_mempool_debug_stats) &
+			  RTE_CACHE_LINE_MASK) != 0);
+	RTE_BUILD_BUG_ON((offsetof(struct rte_mempool, stats) &
+			  RTE_CACHE_LINE_MASK) != 0);
+#endif
+
+	mempool_list = RTE_TAILQ_CAST(rte_mempool_tailq.head, rte_mempool_list);
+
+	/* asked cache too big */
+	if (cache_size > RTE_MEMPOOL_CACHE_MAX_SIZE ||
+	    CALC_CACHE_FLUSHTHRESH(cache_size) > n) {
+		rte_errno = EINVAL;
+		return NULL;
+	}
+
+	/* check that we have both VA and PA */
+	if (vaddr != NULL && paddr == NULL) {
+		rte_errno = EINVAL;
+		return NULL;
+	}
+
+	/* Check that pg_num and pg_shift parameters are valid. */
+	if (pg_num < RTE_DIM(mp->elt_pa) || pg_shift > MEMPOOL_PG_SHIFT_MAX) {
+		rte_errno = EINVAL;
+		return NULL;
+	}
+
+	/* "no cache align" imply "no spread" */
+	if (flags & MEMPOOL_F_NO_CACHE_ALIGN)
+		flags |= MEMPOOL_F_NO_SPREAD;
+
+	/* ring flags */
+	if (flags & MEMPOOL_F_SP_PUT)
+		rg_flags |= RING_F_SP_ENQ;
+	if (flags & MEMPOOL_F_SC_GET)
+		rg_flags |= RING_F_SC_DEQ;
+
+	/* calculate mempool object sizes. */
+	if (!rte_mempool_calc_obj_size(elt_size, flags, &objsz)) {
+		rte_errno = EINVAL;
+		return NULL;
+	}
+
+	rte_rwlock_write_lock(RTE_EAL_MEMPOOL_RWLOCK);
+
+	/* allocate the ring that will be used to store objects */
+	/* Ring functions will return appropriate errors if we are
+	 * running as a secondary process etc., so no checks made
+	 * in this function for that condition */
+	snprintf(rg_name, sizeof(rg_name), RTE_MEMPOOL_MZ_FORMAT, name);
+	r = rte_ring_create(rg_name, rte_align32pow2(n+1), socket_id, rg_flags);
+	if (r == NULL)
+		goto exit;
+
+	/*
+	 * reserve a memory zone for this mempool: private data is
+	 * cache-aligned
+	 */
+	private_data_size = (private_data_size +
+		RTE_MEMPOOL_ALIGN_MASK) & (~RTE_MEMPOOL_ALIGN_MASK);
+
+	if (!rte_eal_has_hugepages()) {
+		/*
+		 * expand private data size to a whole page, so that the
+		 * first pool element will start on a new standard page
+		 */
+		int head = sizeof(struct rte_mempool);
+		int new_size = (private_data_size + head) % page_size;
+
+		if (new_size)
+			private_data_size += page_size - new_size;
+	}
+
+	/* try to allocate tailq entry */
+	te = rte_zmalloc("MEMPOOL_TAILQ_ENTRY", sizeof(*te), 0);
+	if (te == NULL) {
+		RTE_LOG(ERR, MEMPOOL, "Cannot allocate tailq entry!\n");
+		goto exit;
+	}
+
+	/*
+	 * If user provided an external memory buffer, then use it to
+	 * store mempool objects. Otherwise reserve a memzone that is large
+	 * enough to hold mempool header and metadata plus mempool objects.
+	 */
+	mempool_size = MEMPOOL_HEADER_SIZE(mp, pg_num) + private_data_size;
+	mempool_size = RTE_ALIGN_CEIL(mempool_size, RTE_MEMPOOL_ALIGN);
+	if (vaddr == NULL)
+		mempool_size += (size_t)objsz.total_size * n;
+
+	if (!rte_eal_has_hugepages()) {
+		/*
+		 * we want the memory pool to start on a page boundary,
+		 * because pool elements crossing page boundaries would
+		 * result in discontiguous physical addresses
+		 */
+		mempool_size += page_size;
+	}
+
+	snprintf(mz_name, sizeof(mz_name), RTE_MEMPOOL_MZ_FORMAT, name);
+
+	mz = rte_memzone_reserve(mz_name, mempool_size, socket_id, mz_flags);
+
+	/*
+	 * no more memory: in this case we lose the previously reserved
+	 * space for the ring as we cannot free it
+	 */
+	if (mz == NULL) {
+		rte_free(te);
+		goto exit;
+	}
+
+	if (rte_eal_has_hugepages()) {
+		startaddr = (void *)mz->addr;
+	} else {
+		/* align memory pool start address on a page boundary */
+		unsigned long addr = (unsigned long)mz->addr;
+
+		if (addr & (page_size - 1)) {
+			addr += page_size;
+			addr &= ~(page_size - 1);
+		}
+		startaddr = (void *)addr;
+	}
+
+	/* init the mempool structure */
+	mp = startaddr;
+	memset(mp, 0, sizeof(*mp));
+	snprintf(mp->name, sizeof(mp->name), "%s", name);
+	mp->phys_addr = mz->phys_addr;
+	mp->ring = r;
+	mp->size = n;
+	mp->flags = flags;
+	mp->elt_size = objsz.elt_size;
+	mp->header_size = objsz.header_size;
+	mp->trailer_size = objsz.trailer_size;
+	mp->cache_size = cache_size;
+	mp->cache_flushthresh = CALC_CACHE_FLUSHTHRESH(cache_size);
+	mp->private_data_size = private_data_size;
+
+	/* calculate address of the first element for a contiguous mempool. */
+	obj = (char *)mp + MEMPOOL_HEADER_SIZE(mp, pg_num) +
+		private_data_size;
+	obj = RTE_PTR_ALIGN_CEIL(obj, RTE_MEMPOOL_ALIGN);
+
+	/* populate address translation fields. */
+	mp->pg_num = pg_num;
+	mp->pg_shift = pg_shift;
+	mp->pg_mask = RTE_LEN2MASK(mp->pg_shift, typeof(mp->pg_mask));
+
+	/* mempool elements allocated together with mempool */
+	if (vaddr == NULL) {
+		mp->elt_va_start = (uintptr_t)obj;
+		mp->elt_pa[0] = mp->phys_addr +
+			(mp->elt_va_start - (uintptr_t)mp);
+
+	/* mempool elements in a separate chunk of memory. */
+	} else {
+		mp->elt_va_start = (uintptr_t)vaddr;
+		memcpy(mp->elt_pa, paddr, sizeof(mp->elt_pa[0]) * pg_num);
+	}
+
+	mp->elt_va_end = mp->elt_va_start;
+
+	/* call the initializer */
+	if (mp_init)
+		mp_init(mp, mp_init_arg);
+
+	mempool_populate(mp, n, 1, obj_init, obj_init_arg);
+
+	te->data = (void *) mp;
+
+	rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
+	TAILQ_INSERT_TAIL(mempool_list, te, next);
+	rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+exit:
+	rte_rwlock_write_unlock(RTE_EAL_MEMPOOL_RWLOCK);
+
+	return mp;
+}
+#endif
 
 /* Return the number of entries in the mempool */
 unsigned
@@ -705,7 +931,11 @@ rte_mempool_count(const struct rte_mempool *mp)
 {
 	unsigned count;
 
+#ifdef RTE_NEXT_ABI
 	count = rte_mempool_ext_get_count(mp);
+#else
+	count = rte_ring_count(mp->ring);
+#endif
 
 #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
 	{
@@ -861,6 +1091,9 @@ rte_mempool_dump(FILE *f, const struct rte_mempool *mp)
 
 	fprintf(f, "mempool <%s>@%p\n", mp->name, mp);
 	fprintf(f, "  flags=%x\n", mp->flags);
+#ifndef RTE_NEXT_ABI
+	fprintf(f, "  ring=<%s>@%p\n", mp->ring->name, mp->ring);
+#endif
 	fprintf(f, "  phys_addr=0x%" PRIx64 "\n", mp->phys_addr);
 	fprintf(f, "  size=%"PRIu32"\n", mp->size);
 	fprintf(f, "  header_size=%"PRIu32"\n", mp->header_size);
@@ -883,7 +1116,11 @@ rte_mempool_dump(FILE *f, const struct rte_mempool *mp)
 			mp->size);
 
 	cache_count = rte_mempool_dump_cache(f, mp);
+#ifdef RTE_NEXT_ABI
 	common_count = rte_mempool_ext_get_count(mp);
+#else
+	common_count = rte_ring_count(mp->ring);
+#endif
 	if ((cache_count + common_count) > mp->size)
 		common_count = mp->size - cache_count;
 	fprintf(f, "  common_pool_count=%u\n", common_count);
@@ -978,7 +1215,7 @@ void rte_mempool_walk(void (*func)(const struct rte_mempool *, void *),
 	rte_rwlock_read_unlock(RTE_EAL_MEMPOOL_RWLOCK);
 }
 
-
+#ifdef RTE_NEXT_ABI
 /* create the mempool using an external mempool manager */
 struct rte_mempool *
 rte_mempool_create_ext(const char *name, unsigned n, unsigned elt_size,
@@ -1004,3 +1241,4 @@ rte_mempool_create_ext(const char *name, unsigned n, unsigned elt_size,
 
 
 }
+#endif
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 8d8201f..e676296 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -88,8 +88,10 @@ extern "C" {
 struct rte_mempool_debug_stats {
 	uint64_t put_bulk;         /**< Number of puts. */
 	uint64_t put_objs;         /**< Number of objects successfully put. */
+#ifdef RTE_NEXT_ABI
 	uint64_t put_pool_bulk;    /**< Number of puts into pool. */
 	uint64_t put_pool_objs;    /**< Number of objects into pool. */
+#endif
 	uint64_t get_success_bulk; /**< Successful allocation number. */
 	uint64_t get_success_objs; /**< Objects successfully allocated. */
 	uint64_t get_fail_bulk;    /**< Failed allocation number. */
@@ -177,6 +179,7 @@ struct rte_mempool_objtlr {
 #endif
 };
 
+#ifdef RTE_NEXT_ABI
 /* Handler functions for external mempool support */
 typedef void *(*rte_mempool_alloc_t)(struct rte_mempool *mp,
 		const char *name, unsigned n, int socket_id, unsigned flags);
@@ -250,12 +253,16 @@ rte_mempool_ext_get_count(const struct rte_mempool *mp);
  */
 int
 rte_mempool_ext_free(struct rte_mempool *mp);
+#endif
 
 /**
  * The RTE mempool structure.
  */
 struct rte_mempool {
 	char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool. */
+#ifndef RTE_NEXT_ABI
+	struct rte_ring *ring;           /**< Ring to store objects. */
+#endif
 	phys_addr_t phys_addr;           /**< Phys. addr. of mempool struct. */
 	int flags;                       /**< Flags of the mempool. */
 	uint32_t size;                   /**< Size of the mempool. */
@@ -269,10 +276,12 @@ struct rte_mempool {
 
 	unsigned private_data_size;      /**< Size of private data. */
 
+#ifdef RTE_NEXT_ABI
 	/* Common pool data structure pointer */
 	void *rt_pool;
 
 	int16_t handler_idx;
+#endif
 
 #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
 	/** Per-lcore local cache. */
@@ -303,9 +312,10 @@ struct rte_mempool {
 #define MEMPOOL_F_NO_CACHE_ALIGN 0x0002 /**< Do not align objs on cache lines.*/
 #define MEMPOOL_F_SP_PUT         0x0004 /**< Default put is "single-producer".*/
 #define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
+#ifdef RTE_NEXT_ABI
 #define MEMPOOL_F_USE_STACK      0x0010 /**< Use a stack for the common pool. */
 #define MEMPOOL_F_INT_HANDLER    0x0020 /**< Using internal mempool handler */
-
+#endif
 
 /**
  * @internal When debug is enabled, store some statistics.
@@ -836,7 +846,11 @@ void rte_mempool_dump(FILE *f, const struct rte_mempool *mp);
  */
 static inline void __attribute__((always_inline))
 __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
+#ifdef RTE_NEXT_ABI
 		    unsigned n, __attribute__((unused)) int is_mp)
+#else
+		    unsigned n, int is_mp)
+#endif
 {
 #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
 	struct rte_mempool_cache *cache;
@@ -852,7 +866,12 @@ __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
 
 #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
 	/* cache is not enabled or single producer or non-EAL thread */
+#ifdef RTE_NEXT_ABI
 	if (unlikely(cache_size == 0 || lcore_id >= RTE_MAX_LCORE))
+#else
+	if (unlikely(cache_size == 0 || is_mp == 0 ||
+		     lcore_id >= RTE_MAX_LCORE))
+#endif
 		goto ring_enqueue;
 
 	/* Go straight to ring if put would overflow mem allocated for cache */
@@ -875,9 +894,15 @@ __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
 
 	cache->len += n;
 
+#ifdef RTE_NEXT_ABI
 	if (unlikely(cache->len >= flushthresh)) {
 		rte_mempool_ext_put_bulk(mp, &cache->objs[cache_size],
 				cache->len - cache_size);
+#else
+	if (cache->len >= flushthresh) {
+		rte_ring_mp_enqueue_bulk(mp->ring, &cache->objs[cache_size],
+				cache->len - cache_size);
+#endif
 		cache->len = cache_size;
 	}
 
@@ -886,10 +911,28 @@ __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
 ring_enqueue:
 #endif /* RTE_MEMPOOL_CACHE_MAX_SIZE > 0 */
 
+#ifdef RTE_NEXT_ABI
 	/* Increment stats counter to tell us how many pool puts happened */
 	__MEMPOOL_STAT_ADD(mp, put_pool, n);
 
 	rte_mempool_ext_put_bulk(mp, obj_table, n);
+#else
+	/* push remaining objects in ring */
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+	if (is_mp) {
+		if (rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n) < 0)
+			rte_panic("cannot put objects in mempool\n");
+	} else {
+		if (rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n) < 0)
+			rte_panic("cannot put objects in mempool\n");
+	}
+#else
+	if (is_mp)
+		rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n);
+	else
+		rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n);
+#endif
+#endif
 }
 
 
@@ -1013,7 +1056,11 @@ rte_mempool_put(struct rte_mempool *mp, void *obj)
  */
 static inline int __attribute__((always_inline))
 __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
+#ifdef RTE_NEXT_ABI
 		   unsigned n, __attribute__((unused))int is_mc)
+#else
+		   unsigned n, int is_mc)
+#endif
 {
 	int ret;
 #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
@@ -1024,8 +1071,13 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
 	uint32_t cache_size = mp->cache_size;
 
 	/* cache is not enabled or single consumer */
+#ifdef RTE_NEXT_ABI
 	if (unlikely(cache_size == 0 || n >= cache_size ||
 						lcore_id >= RTE_MAX_LCORE))
+#else
+	if (unlikely(cache_size == 0 || is_mc == 0 ||
+		     n >= cache_size || lcore_id >= RTE_MAX_LCORE))
+#endif
 		goto ring_dequeue;
 
 	cache = &mp->local_cache[lcore_id];
@@ -1037,8 +1089,13 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
 		uint32_t req = n + (cache_size - cache->len);
 
 		/* How many do we require i.e. number to fill the cache + the request */
+#ifdef RTE_NEXT_ABI
 		ret = rte_mempool_ext_get_bulk(mp,
 						&cache->objs[cache->len], req);
+#else
+		ret = rte_ring_mc_dequeue_bulk(mp->ring,
+						&cache->objs[cache->len], req);
+#endif
 		if (unlikely(ret < 0)) {
 			/*
 			 * In the offchance that we are buffer constrained,
@@ -1066,7 +1123,14 @@ ring_dequeue:
 #endif /* RTE_MEMPOOL_CACHE_MAX_SIZE > 0 */
 
 	/* get remaining objects from ring */
+#ifdef RTE_NEXT_ABI
 	ret = rte_mempool_ext_get_bulk(mp, obj_table, n);
+#else
+	if (is_mc)
+		ret = rte_ring_mc_dequeue_bulk(mp->ring, obj_table, n);
+	else
+		ret = rte_ring_sc_dequeue_bulk(mp->ring, obj_table, n);
+#endif
 
 	if (ret < 0)
 		__MEMPOOL_STAT_ADD(mp, get_fail, n);
@@ -1468,6 +1532,7 @@ ssize_t rte_mempool_xmem_usage(void *vaddr, uint32_t elt_num, size_t elt_sz,
  */
 void rte_mempool_walk(void (*func)(const struct rte_mempool *, void *arg),
 		      void *arg);
+#ifdef RTE_NEXT_ABI
 
 /**
  * Function to get the name of a mempool handler
@@ -1541,6 +1606,7 @@ rte_mempool_create_ext(const char *name, unsigned n, unsigned elt_size,
 		rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
 		int socket_id, unsigned flags,
 		const char *handler_name);
+#endif
 
 #ifdef __cplusplus
 }
-- 
2.5.0


* Re: [dpdk-dev, 1/6] mempool: add external mempool manager support
  2016-02-16 14:48   ` [PATCH 1/6] mempool: add external mempool manager support David Hunt
@ 2016-02-16 19:27     ` Jan Viktorin
  2016-02-19 13:30     ` [PATCH " Olivier MATZ
  1 sibling, 0 replies; 238+ messages in thread
From: Jan Viktorin @ 2016-02-16 19:27 UTC (permalink / raw)
  To: David Hunt; +Cc: dev

Hello David,

(I wanted to reply to the 0/6 patch but I couldn't find it anywhere in
mbox format, nor on gmane which is strange.)

I've had a quick look at both versions of the patch series. I've got
only one question at the moment. Would it be possible to somehow
integrate calls to the kernel's dma_map/unmap_* interface? I think this
could be implemented through vfio iommu group zero (I hope so).

The issue is that without being able to explicitly flush buffers before
a DMA transmission, it is not easy to use the mempool manager on most
ARMv7 chips. It can be done by calling dma_alloc_coherent and passing
that memory into the mempool manager, but such memory is very slow
since it is non-cacheable.

Regards
Jan

On Tue, 16 Feb 2016 14:48:10 +0000
David Hunt <david.hunt@intel.com> wrote:

> Adds the new rte_mempool_create_ext api and callback mechanism for
> external mempool handlers
> 
> Modifies the existing rte_mempool_create to set up the handler_idx to
> the relevant mempool handler based on the handler name:
>     ring_sp_sc
>     ring_mp_mc
>     ring_sp_mc
>     ring_mp_sc
> 
> v2: merges the duplicated code in rte_mempool_xmem_create and
> rte_mempool_create_ext into one common function. The old functions
> now call the new common function with the relevant parameters.
> 
> Signed-off-by: David Hunt <david.hunt@intel.com>
> 
> ---
> app/test/test_mempool_perf.c               |   1 -
>  lib/librte_mempool/Makefile                |   2 +
>  lib/librte_mempool/rte_mempool.c           | 383 ++++++++++++++++++-----------
>  lib/librte_mempool/rte_mempool.h           | 200 ++++++++++++---
>  lib/librte_mempool/rte_mempool_default.c   | 236 ++++++++++++++++++
>  lib/librte_mempool/rte_mempool_internal.h  |  75 ++++++
>  lib/librte_mempool/rte_mempool_version.map |   1 +
>  7 files changed, 717 insertions(+), 181 deletions(-)
>  create mode 100644 lib/librte_mempool/rte_mempool_default.c
>  create mode 100644 lib/librte_mempool/rte_mempool_internal.h
> 
> diff --git a/app/test/test_mempool_perf.c b/app/test/test_mempool_perf.c
> index cdc02a0..091c1df 100644
> --- a/app/test/test_mempool_perf.c
> +++ b/app/test/test_mempool_perf.c
> @@ -161,7 +161,6 @@ per_lcore_mempool_test(__attribute__((unused)) void *arg)
>  							   n_get_bulk);
>  				if (unlikely(ret < 0)) {
>  					rte_mempool_dump(stdout, mp);
> -					rte_ring_dump(stdout, mp->ring);
>  					/* in this case, objects are lost... */
>  					return -1;
>  				}
> diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
> index a6898ef..aeaffd1 100644
> --- a/lib/librte_mempool/Makefile
> +++ b/lib/librte_mempool/Makefile
> @@ -42,6 +42,8 @@ LIBABIVER := 1
>  
>  # all source are stored in SRCS-y
>  SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool.c
> +SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_default.c
> +
>  ifeq ($(CONFIG_RTE_LIBRTE_XEN_DOM0),y)
>  SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_dom0_mempool.c
>  endif
> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> index aff5f6d..a577a3e 100644
> --- a/lib/librte_mempool/rte_mempool.c
> +++ b/lib/librte_mempool/rte_mempool.c
> @@ -59,10 +59,11 @@
>  #include <rte_spinlock.h>
>  
>  #include "rte_mempool.h"
> +#include "rte_mempool_internal.h"
>  
>  TAILQ_HEAD(rte_mempool_list, rte_tailq_entry);
>  
> -static struct rte_tailq_elem rte_mempool_tailq = {
> +struct rte_tailq_elem rte_mempool_tailq = {
>  	.name = "RTE_MEMPOOL",
>  };
>  EAL_REGISTER_TAILQ(rte_mempool_tailq)
> @@ -149,7 +150,7 @@ mempool_add_elem(struct rte_mempool *mp, void *obj, uint32_t obj_idx,
>  		obj_init(mp, obj_init_arg, obj, obj_idx);
>  
>  	/* enqueue in ring */
> -	rte_ring_sp_enqueue(mp->ring, obj);
> +	rte_mempool_ext_put_bulk(mp, &obj, 1);
>  }
>  
>  uint32_t
> @@ -375,26 +376,6 @@ rte_mempool_xmem_usage(void *vaddr, uint32_t elt_num, size_t elt_sz,
>  	return usz;
>  }
>  
> -#ifndef RTE_LIBRTE_XEN_DOM0
> -/* stub if DOM0 support not configured */
> -struct rte_mempool *
> -rte_dom0_mempool_create(const char *name __rte_unused,
> -			unsigned n __rte_unused,
> -			unsigned elt_size __rte_unused,
> -			unsigned cache_size __rte_unused,
> -			unsigned private_data_size __rte_unused,
> -			rte_mempool_ctor_t *mp_init __rte_unused,
> -			void *mp_init_arg __rte_unused,
> -			rte_mempool_obj_ctor_t *obj_init __rte_unused,
> -			void *obj_init_arg __rte_unused,
> -			int socket_id __rte_unused,
> -			unsigned flags __rte_unused)
> -{
> -	rte_errno = EINVAL;
> -	return NULL;
> -}
> -#endif
> -
>  /* create the mempool */
>  struct rte_mempool *
>  rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
> @@ -420,117 +401,76 @@ rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
>  }
>  
>  /*
> + * Common mempool create function.
>   * Create the mempool over already allocated chunk of memory.
>   * That external memory buffer can consists of physically disjoint pages.
>   * Setting vaddr to NULL, makes mempool to fallback to original behaviour
> - * and allocate space for mempool and it's elements as one big chunk of
> - * physically continuos memory.
> - * */
> -struct rte_mempool *
> -rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
> + * which will call rte_mempool_ext_alloc to allocate the object memory.
> + * If it is an internal mempool handler, it will allocate space for mempool
> + * and its elements as one big chunk of physically contiguous memory.
> + * If it is an external mempool handler, it will allocate space for mempool
> + * and call rte_mempool_ext_alloc for the object memory.
> + */
> +static struct rte_mempool *
> +mempool_create(const char *name,
> +		unsigned num_elt, unsigned elt_size,
>  		unsigned cache_size, unsigned private_data_size,
>  		rte_mempool_ctor_t *mp_init, void *mp_init_arg,
>  		rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
> -		int socket_id, unsigned flags, void *vaddr,
> -		const phys_addr_t paddr[], uint32_t pg_num, uint32_t pg_shift)
> +		int socket_id, unsigned flags,
> +		void *vaddr, const phys_addr_t paddr[],
> +		uint32_t pg_num, uint32_t pg_shift,
> +		const char *handler_name)
>  {
> -	char mz_name[RTE_MEMZONE_NAMESIZE];
> -	char rg_name[RTE_RING_NAMESIZE];
> +	const struct rte_memzone *mz;
>  	struct rte_mempool_list *mempool_list;
>  	struct rte_mempool *mp = NULL;
>  	struct rte_tailq_entry *te;
> -	struct rte_ring *r;
> -	const struct rte_memzone *mz;
> -	size_t mempool_size;
> +	char mz_name[RTE_MEMZONE_NAMESIZE];
>  	int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
> -	int rg_flags = 0;
> -	void *obj;
>  	struct rte_mempool_objsz objsz;
> -	void *startaddr;
> +	void *startaddr = NULL;
>  	int page_size = getpagesize();
> -
> -	/* compilation-time checks */
> -	RTE_BUILD_BUG_ON((sizeof(struct rte_mempool) &
> -			  RTE_CACHE_LINE_MASK) != 0);
> -#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
> -	RTE_BUILD_BUG_ON((sizeof(struct rte_mempool_cache) &
> -			  RTE_CACHE_LINE_MASK) != 0);
> -	RTE_BUILD_BUG_ON((offsetof(struct rte_mempool, local_cache) &
> -			  RTE_CACHE_LINE_MASK) != 0);
> -#endif
> -#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
> -	RTE_BUILD_BUG_ON((sizeof(struct rte_mempool_debug_stats) &
> -			  RTE_CACHE_LINE_MASK) != 0);
> -	RTE_BUILD_BUG_ON((offsetof(struct rte_mempool, stats) &
> -			  RTE_CACHE_LINE_MASK) != 0);
> -#endif
> +	void *obj = NULL;
> +	size_t mempool_size;
>  
>  	mempool_list = RTE_TAILQ_CAST(rte_mempool_tailq.head, rte_mempool_list);
>  
>  	/* asked cache too big */
>  	if (cache_size > RTE_MEMPOOL_CACHE_MAX_SIZE ||
> -	    CALC_CACHE_FLUSHTHRESH(cache_size) > n) {
> +		CALC_CACHE_FLUSHTHRESH(cache_size) > num_elt) {
>  		rte_errno = EINVAL;
>  		return NULL;
>  	}
>  
> -	/* check that we have both VA and PA */
> -	if (vaddr != NULL && paddr == NULL) {
> -		rte_errno = EINVAL;
> -		return NULL;
> -	}
> -
> -	/* Check that pg_num and pg_shift parameters are valid. */
> -	if (pg_num < RTE_DIM(mp->elt_pa) || pg_shift > MEMPOOL_PG_SHIFT_MAX) {
> -		rte_errno = EINVAL;
> -		return NULL;
> -	}
> -
> -	/* "no cache align" imply "no spread" */
> -	if (flags & MEMPOOL_F_NO_CACHE_ALIGN)
> -		flags |= MEMPOOL_F_NO_SPREAD;
> +	if (flags & MEMPOOL_F_INT_HANDLER) {
> +		/* Check that pg_num and pg_shift parameters are valid. */
> +		if (pg_num < RTE_DIM(mp->elt_pa) ||
> +				pg_shift > MEMPOOL_PG_SHIFT_MAX) {
> +			rte_errno = EINVAL;
> +			return NULL;
> +		}
>  
> -	/* ring flags */
> -	if (flags & MEMPOOL_F_SP_PUT)
> -		rg_flags |= RING_F_SP_ENQ;
> -	if (flags & MEMPOOL_F_SC_GET)
> -		rg_flags |= RING_F_SC_DEQ;
> +		/* "no cache align" imply "no spread" */
> +		if (flags & MEMPOOL_F_NO_CACHE_ALIGN)
> +			flags |= MEMPOOL_F_NO_SPREAD;
>  
> -	/* calculate mempool object sizes. */
> -	if (!rte_mempool_calc_obj_size(elt_size, flags, &objsz)) {
> -		rte_errno = EINVAL;
> -		return NULL;
> +		/* calculate mempool object sizes. */
> +		if (!rte_mempool_calc_obj_size(elt_size, flags, &objsz)) {
> +			rte_errno = EINVAL;
> +			return NULL;
> +		}
>  	}
>  
>  	rte_rwlock_write_lock(RTE_EAL_MEMPOOL_RWLOCK);
>  
> -	/* allocate the ring that will be used to store objects */
> -	/* Ring functions will return appropriate errors if we are
> -	 * running as a secondary process etc., so no checks made
> -	 * in this function for that condition */
> -	snprintf(rg_name, sizeof(rg_name), RTE_MEMPOOL_MZ_FORMAT, name);
> -	r = rte_ring_create(rg_name, rte_align32pow2(n+1), socket_id, rg_flags);
> -	if (r == NULL)
> -		goto exit;
> -
>  	/*
>  	 * reserve a memory zone for this mempool: private data is
>  	 * cache-aligned
>  	 */
> -	private_data_size = (private_data_size +
> -			     RTE_MEMPOOL_ALIGN_MASK) & (~RTE_MEMPOOL_ALIGN_MASK);
> +	private_data_size = RTE_ALIGN_CEIL(private_data_size,
> +						RTE_MEMPOOL_ALIGN);
>  
> -	if (! rte_eal_has_hugepages()) {
> -		/*
> -		 * expand private data size to a whole page, so that the
> -		 * first pool element will start on a new standard page
> -		 */
> -		int head = sizeof(struct rte_mempool);
> -		int new_size = (private_data_size + head) % page_size;
> -		if (new_size) {
> -			private_data_size += page_size - new_size;
> -		}
> -	}
>  
>  	/* try to allocate tailq entry */
>  	te = rte_zmalloc("MEMPOOL_TAILQ_ENTRY", sizeof(*te), 0);
> @@ -539,23 +479,51 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
>  		goto exit;
>  	}
>  
> -	/*
> -	 * If user provided an external memory buffer, then use it to
> -	 * store mempool objects. Otherwise reserve a memzone that is large
> -	 * enough to hold mempool header and metadata plus mempool objects.
> -	 */
> -	mempool_size = MEMPOOL_HEADER_SIZE(mp, pg_num) + private_data_size;
> -	mempool_size = RTE_ALIGN_CEIL(mempool_size, RTE_MEMPOOL_ALIGN);
> -	if (vaddr == NULL)
> -		mempool_size += (size_t)objsz.total_size * n;
> +	if (flags & MEMPOOL_F_INT_HANDLER) {
>  
> -	if (! rte_eal_has_hugepages()) {
> +		if (!rte_eal_has_hugepages()) {
> +			/*
> +			 * expand private data size to a whole page, so that the
> +			 * first pool element will start on a new standard page
> +			 */
> +			int head = sizeof(struct rte_mempool);
> +			int new_size = (private_data_size + head) % page_size;
> +
> +			if (new_size)
> +				private_data_size += page_size - new_size;
> +		}
> +
> +
> +		/*
> +		 * If user provided an external memory buffer, then use it to
> +		 * store mempool objects. Otherwise reserve a memzone that is
> +		 * large enough to hold mempool header and metadata plus
> +		 * mempool objects
> +		 */
> +		mempool_size =
> +			MEMPOOL_HEADER_SIZE(mp, pg_num) + private_data_size;
> +		mempool_size =
> +			RTE_ALIGN_CEIL(mempool_size, RTE_MEMPOOL_ALIGN);
> +		if (vaddr == NULL)
> +			mempool_size += (size_t)objsz.total_size * num_elt;
> +
> +		if (!rte_eal_has_hugepages()) {
> +			/*
> +			 * we want the memory pool to start on a page boundary,
> +			 * because pool elements crossing page boundaries would
> +			 * result in discontiguous physical addresses
> +			 */
> +			mempool_size += page_size;
> +		}
> +	} else {
>  		/*
> -		 * we want the memory pool to start on a page boundary,
> -		 * because pool elements crossing page boundaries would
> -		 * result in discontiguous physical addresses
> +		 * If user provided an external memory buffer, then use it to
> +		 * store mempool objects. Otherwise reserve a memzone that is
> +		 * large enough to hold mempool header and metadata plus
> +		 * mempool objects
>  		 */
> -		mempool_size += page_size;
> +		mempool_size = sizeof(*mp) + private_data_size;
> +		mempool_size = RTE_ALIGN_CEIL(mempool_size, RTE_MEMPOOL_ALIGN);
>  	}
>  
>  	snprintf(mz_name, sizeof(mz_name), RTE_MEMPOOL_MZ_FORMAT, name);
> @@ -563,24 +531,29 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
>  	mz = rte_memzone_reserve(mz_name, mempool_size, socket_id, mz_flags);
>  
>  	/*
> -	 * no more memory: in this case we loose previously reserved
> -	 * space for the ring as we cannot free it
> +	 * no more memory
>  	 */
>  	if (mz == NULL) {
>  		rte_free(te);
>  		goto exit;
>  	}
>  
> -	if (rte_eal_has_hugepages()) {
> -		startaddr = (void*)mz->addr;
> -	} else {
> -		/* align memory pool start address on a page boundary */
> -		unsigned long addr = (unsigned long)mz->addr;
> -		if (addr & (page_size - 1)) {
> -			addr += page_size;
> -			addr &= ~(page_size - 1);
> +	if (flags & MEMPOOL_F_INT_HANDLER) {
> +
> +		if (rte_eal_has_hugepages()) {
> +			startaddr = (void *)mz->addr;
> +		} else {
> +			/* align memory pool start address on a page boundary */
> +			unsigned long addr = (unsigned long)mz->addr;
> +
> +			if (addr & (page_size - 1)) {
> +				addr += page_size;
> +				addr &= ~(page_size - 1);
> +			}
> +			startaddr = (void *)addr;
>  		}
> -		startaddr = (void*)addr;
> +	} else {
> +		startaddr = (void *)mz->addr;
>  	}
>  
>  	/* init the mempool structure */
> @@ -588,8 +561,7 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
>  	memset(mp, 0, sizeof(*mp));
>  	snprintf(mp->name, sizeof(mp->name), "%s", name);
>  	mp->phys_addr = mz->phys_addr;
> -	mp->ring = r;
> -	mp->size = n;
> +	mp->size = num_elt;
>  	mp->flags = flags;
>  	mp->elt_size = objsz.elt_size;
>  	mp->header_size = objsz.header_size;
> @@ -598,35 +570,54 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
>  	mp->cache_flushthresh = CALC_CACHE_FLUSHTHRESH(cache_size);
>  	mp->private_data_size = private_data_size;
>  
> -	/* calculate address of the first element for continuous mempool. */
> -	obj = (char *)mp + MEMPOOL_HEADER_SIZE(mp, pg_num) +
> -		private_data_size;
> -	obj = RTE_PTR_ALIGN_CEIL(obj, RTE_MEMPOOL_ALIGN);
> -
> -	/* populate address translation fields. */
> -	mp->pg_num = pg_num;
> -	mp->pg_shift = pg_shift;
> -	mp->pg_mask = RTE_LEN2MASK(mp->pg_shift, typeof(mp->pg_mask));
> +	mp->handler_idx = rte_get_mempool_handler_idx(handler_name);
> +	if (mp->handler_idx < 0) {
> +		RTE_LOG(ERR, MEMPOOL, "Cannot find mempool handler by name!\n");
> +		rte_free(te);
> +		goto exit;
> +	}
>  
> -	/* mempool elements allocated together with mempool */
> -	if (vaddr == NULL) {
> -		mp->elt_va_start = (uintptr_t)obj;
> -		mp->elt_pa[0] = mp->phys_addr +
> -			(mp->elt_va_start - (uintptr_t)mp);
> +	if (flags & MEMPOOL_F_INT_HANDLER) {
> +		/* calculate address of first element for contiguous mempool. */
> +		obj = (char *)mp + MEMPOOL_HEADER_SIZE(mp, pg_num) +
> +			private_data_size;
> +		obj = RTE_PTR_ALIGN_CEIL(obj, RTE_MEMPOOL_ALIGN);
> +
> +		/* populate address translation fields. */
> +		mp->pg_num = pg_num;
> +		mp->pg_shift = pg_shift;
> +		mp->pg_mask = RTE_LEN2MASK(mp->pg_shift, typeof(mp->pg_mask));
> +
> +		/* mempool elements allocated together with mempool */
> +		if (vaddr == NULL) {
> +			mp->elt_va_start = (uintptr_t)obj;
> +			mp->elt_pa[0] = mp->phys_addr +
> +				(mp->elt_va_start - (uintptr_t)mp);
> +		/* mempool elements in a separate chunk of memory. */
> +		} else {
> +			mp->elt_va_start = (uintptr_t)vaddr;
> +			memcpy(mp->elt_pa, paddr,
> +				sizeof(mp->elt_pa[0]) * pg_num);
> +		}
>  
> -	/* mempool elements in a separate chunk of memory. */
> -	} else {
> -		mp->elt_va_start = (uintptr_t)vaddr;
> -		memcpy(mp->elt_pa, paddr, sizeof (mp->elt_pa[0]) * pg_num);
> +		mp->elt_va_end = mp->elt_va_start;
>  	}
>  
> -	mp->elt_va_end = mp->elt_va_start;
> +	/* Parameters are set up. Call the mempool handler's alloc callback */
> +	mp->rt_pool =
> +		rte_mempool_ext_alloc(mp, name, num_elt, socket_id, flags);
> +	if (mp->rt_pool == NULL) {
> +		RTE_LOG(ERR, MEMPOOL, "Failed to alloc mempool!\n");
> +		rte_free(te);
> +		goto exit;
> +	}
>  
>  	/* call the initializer */
>  	if (mp_init)
>  		mp_init(mp, mp_init_arg);
>  
> -	mempool_populate(mp, n, 1, obj_init, obj_init_arg);
> +	if (obj_init)
> +		mempool_populate(mp, num_elt, 1, obj_init, obj_init_arg);
>  
>  	te->data = (void *) mp;
>  
> @@ -640,13 +631,79 @@ exit:
>  	return mp;
>  }
>  
> +/* Create the mempool over already allocated chunk of memory */
> +struct rte_mempool *
> +rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
> +		unsigned cache_size, unsigned private_data_size,
> +		rte_mempool_ctor_t *mp_init, void *mp_init_arg,
> +		rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
> +		int socket_id, unsigned flags, void *vaddr,
> +		const phys_addr_t paddr[], uint32_t pg_num, uint32_t pg_shift)
> +{
> +	struct rte_mempool *mp = NULL;
> +	char handler_name[RTE_MEMPOOL_NAMESIZE];
> +
> +
> +	/* compilation-time checks */
> +	RTE_BUILD_BUG_ON((sizeof(struct rte_mempool) &
> +			  RTE_CACHE_LINE_MASK) != 0);
> +#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
> +	RTE_BUILD_BUG_ON((sizeof(struct rte_mempool_cache) &
> +			  RTE_CACHE_LINE_MASK) != 0);
> +	RTE_BUILD_BUG_ON((offsetof(struct rte_mempool, local_cache) &
> +			  RTE_CACHE_LINE_MASK) != 0);
> +#endif
> +#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
> +	RTE_BUILD_BUG_ON((sizeof(struct rte_mempool_debug_stats) &
> +			  RTE_CACHE_LINE_MASK) != 0);
> +	RTE_BUILD_BUG_ON((offsetof(struct rte_mempool, stats) &
> +			  RTE_CACHE_LINE_MASK) != 0);
> +#endif
> +
> +
> +	/* check that we have both VA and PA */
> +	if (vaddr != NULL && paddr == NULL) {
> +		rte_errno = EINVAL;
> +		return NULL;
> +	}
> +
> +	/*
> +	 * Since we have four combinations of SP/SC/MP/MC rings,
> +	 * examine the flags to select the correct handler name,
> +	 * which indexes into the handler table.
> +	 */
> +	if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
> +		sprintf(handler_name, "%s", "ring_sp_sc");
> +	else if (flags & MEMPOOL_F_SP_PUT)
> +		sprintf(handler_name, "%s", "ring_sp_mc");
> +	else if (flags & MEMPOOL_F_SC_GET)
> +		sprintf(handler_name, "%s", "ring_mp_sc");
> +	else
> +		sprintf(handler_name, "%s", "ring_mp_mc");
> +
> +	flags |= MEMPOOL_F_INT_HANDLER;
> +
> +	mp = mempool_create(name,
> +		n, elt_size,
> +		cache_size, private_data_size,
> +		mp_init, mp_init_arg,
> +		obj_init, obj_init_arg,
> +		socket_id,
> +		flags,
> +		vaddr, paddr,
> +		pg_num, pg_shift,
> +		handler_name);
> +
> +	return mp;
> +}
> +
>  /* Return the number of entries in the mempool */
>  unsigned
>  rte_mempool_count(const struct rte_mempool *mp)
>  {
>  	unsigned count;
>  
> -	count = rte_ring_count(mp->ring);
> +	count = rte_mempool_ext_get_count(mp);
>  
>  #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
>  	{
> @@ -802,7 +859,6 @@ rte_mempool_dump(FILE *f, const struct rte_mempool *mp)
>  
>  	fprintf(f, "mempool <%s>@%p\n", mp->name, mp);
>  	fprintf(f, "  flags=%x\n", mp->flags);
> -	fprintf(f, "  ring=<%s>@%p\n", mp->ring->name, mp->ring);
>  	fprintf(f, "  phys_addr=0x%" PRIx64 "\n", mp->phys_addr);
>  	fprintf(f, "  size=%"PRIu32"\n", mp->size);
>  	fprintf(f, "  header_size=%"PRIu32"\n", mp->header_size);
> @@ -825,7 +881,7 @@ rte_mempool_dump(FILE *f, const struct rte_mempool *mp)
>  			mp->size);
>  
>  	cache_count = rte_mempool_dump_cache(f, mp);
> -	common_count = rte_ring_count(mp->ring);
> +	common_count = rte_mempool_ext_get_count(mp);
>  	if ((cache_count + common_count) > mp->size)
>  		common_count = mp->size - cache_count;
>  	fprintf(f, "  common_pool_count=%u\n", common_count);
> @@ -919,3 +975,30 @@ void rte_mempool_walk(void (*func)(const struct rte_mempool *, void *),
>  
>  	rte_rwlock_read_unlock(RTE_EAL_MEMPOOL_RWLOCK);
>  }
> +
> +
> +/* create the mempool using an external mempool manager */
> +struct rte_mempool *
> +rte_mempool_create_ext(const char *name, unsigned n, unsigned elt_size,
> +	unsigned cache_size, unsigned private_data_size,
> +	rte_mempool_ctor_t *mp_init, void *mp_init_arg,
> +	rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
> +	int socket_id, unsigned flags,
> +	const char *handler_name)
> +{
> +	struct rte_mempool *mp = NULL;
> +
> +	mp = mempool_create(name,
> +		n, elt_size,
> +		cache_size, private_data_size,
> +		mp_init, mp_init_arg,
> +		obj_init, obj_init_arg,
> +		socket_id, flags,
> +		NULL, NULL,              /* vaddr, paddr */
> +		0, 0,                    /* pg_num, pg_shift, */
> +		handler_name);
> +
> +	return mp;
> +
> +
> +}
> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> index 9745bf0..3705fbd 100644
> --- a/lib/librte_mempool/rte_mempool.h
> +++ b/lib/librte_mempool/rte_mempool.h
> @@ -88,6 +88,8 @@ extern "C" {
>  struct rte_mempool_debug_stats {
>  	uint64_t put_bulk;         /**< Number of puts. */
>  	uint64_t put_objs;         /**< Number of objects successfully put. */
> +	uint64_t put_pool_bulk;    /**< Number of puts into pool. */
> +	uint64_t put_pool_objs;    /**< Number of objects into pool. */
>  	uint64_t get_success_bulk; /**< Successful allocation number. */
>  	uint64_t get_success_objs; /**< Objects successfully allocated. */
>  	uint64_t get_fail_bulk;    /**< Failed allocation number. */
> @@ -175,12 +177,85 @@ struct rte_mempool_objtlr {
>  #endif
>  };
>  
> +/* Handler functions for external mempool support */
> +typedef void *(*rte_mempool_alloc_t)(struct rte_mempool *mp,
> +		const char *name, unsigned n, int socket_id, unsigned flags);
> +typedef int (*rte_mempool_put_t)(void *p,
> +		void * const *obj_table, unsigned n);
> +typedef int (*rte_mempool_get_t)(void *p, void **obj_table,
> +		unsigned n);
> +typedef unsigned (*rte_mempool_get_count)(void *p);
> +typedef int (*rte_mempool_free_t)(struct rte_mempool *mp);
> +
> +/**
> + * @internal wrapper for external mempool manager alloc callback.
> + *
> + * @param mp
> + *   Pointer to the memory pool.
> + * @param name
> + *   Name of the memory pool.
> + * @param n
> + *   Number of objects in the mempool.
> + * @param socket_id
> + *   socket id on which to allocate.
> + * @param flags
> + *   general flags to allocate function (MEMPOOL_F_* flags)
> + */
> +void *
> +rte_mempool_ext_alloc(struct rte_mempool *mp,
> +		const char *name, unsigned n, int socket_id, unsigned flags);
> +
> +/**
> + * @internal wrapper for external mempool manager get callback.
> + *
> + * @param mp
> + *   Pointer to the memory pool.
> + * @param obj_table
> + *   Pointer to a table of void * pointers (objects).
> + * @param n
> + *   Number of objects to get
> + */
> +int
> +rte_mempool_ext_get_bulk(struct rte_mempool *mp, void **obj_table,
> +		unsigned n);
> +
> +/**
> + * @internal wrapper for external mempool manager put callback.
> + *
> + * @param mp
> + *   Pointer to the memory pool.
> + * @param obj_table
> + *   Pointer to a table of void * pointers (objects).
> + * @param n
> + *   Number of objects to put
> + */
> +int
> +rte_mempool_ext_put_bulk(struct rte_mempool *mp, void * const *obj_table,
> +		unsigned n);
> +
> +/**
> + * @internal wrapper for external mempool manager get_count callback.
> + *
> + * @param mp
> + *   Pointer to the memory pool.
> + */
> +int
> +rte_mempool_ext_get_count(const struct rte_mempool *mp);
> +
> +/**
> + * @internal wrapper for external mempool manager free callback.
> + *
> + * @param mp
> + *   Pointer to the memory pool.
> + */
> +int
> +rte_mempool_ext_free(struct rte_mempool *mp);
> +
>  /**
>   * The RTE mempool structure.
>   */
>  struct rte_mempool {
>  	char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool. */
> -	struct rte_ring *ring;           /**< Ring to store objects. */
>  	phys_addr_t phys_addr;           /**< Phys. addr. of mempool struct. */
>  	int flags;                       /**< Flags of the mempool. */
>  	uint32_t size;                   /**< Size of the mempool. */
> @@ -194,6 +269,11 @@ struct rte_mempool {
>  
>  	unsigned private_data_size;      /**< Size of private data. */
>  
> +	/* Common pool data structure pointer */
> +	void *rt_pool;
> +
> +	int16_t handler_idx;
> +
>  #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
>  	/** Per-lcore local cache. */
>  	struct rte_mempool_cache local_cache[RTE_MAX_LCORE];
> @@ -223,6 +303,8 @@ struct rte_mempool {
>  #define MEMPOOL_F_NO_CACHE_ALIGN 0x0002 /**< Do not align objs on cache lines.*/
>  #define MEMPOOL_F_SP_PUT         0x0004 /**< Default put is "single-producer".*/
>  #define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
> +#define MEMPOOL_F_INT_HANDLER    0x0020 /**< Using internal mempool handler */
> +
>  
>  /**
>   * @internal When debug is enabled, store some statistics.
> @@ -753,7 +835,7 @@ void rte_mempool_dump(FILE *f, const struct rte_mempool *mp);
>   */
>  static inline void __attribute__((always_inline))
>  __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
> -		    unsigned n, int is_mp)
> +		    unsigned n, __attribute__((unused)) int is_mp)
>  {
>  #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
>  	struct rte_mempool_cache *cache;
> @@ -769,8 +851,7 @@ __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
>  
>  #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
>  	/* cache is not enabled or single producer or non-EAL thread */
> -	if (unlikely(cache_size == 0 || is_mp == 0 ||
> -		     lcore_id >= RTE_MAX_LCORE))
> +	if (unlikely(cache_size == 0 || lcore_id >= RTE_MAX_LCORE))
>  		goto ring_enqueue;
>  
>  	/* Go straight to ring if put would overflow mem allocated for cache */
> @@ -793,8 +874,8 @@ __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
>  
>  	cache->len += n;
>  
> -	if (cache->len >= flushthresh) {
> -		rte_ring_mp_enqueue_bulk(mp->ring, &cache->objs[cache_size],
> +	if (unlikely(cache->len >= flushthresh)) {
> +		rte_mempool_ext_put_bulk(mp, &cache->objs[cache_size],
>  				cache->len - cache_size);
>  		cache->len = cache_size;
>  	}
> @@ -804,22 +885,10 @@ __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
>  ring_enqueue:
>  #endif /* RTE_MEMPOOL_CACHE_MAX_SIZE > 0 */
>  
> -	/* push remaining objects in ring */
> -#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
> -	if (is_mp) {
> -		if (rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n) < 0)
> -			rte_panic("cannot put objects in mempool\n");
> -	}
> -	else {
> -		if (rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n) < 0)
> -			rte_panic("cannot put objects in mempool\n");
> -	}
> -#else
> -	if (is_mp)
> -		rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n);
> -	else
> -		rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n);
> -#endif
> +	/* Increment stats counter to tell us how many pool puts happened */
> +	__MEMPOOL_STAT_ADD(mp, put_pool, n);
> +
> +	rte_mempool_ext_put_bulk(mp, obj_table, n);
>  }
>  
>  
> @@ -943,7 +1012,7 @@ rte_mempool_put(struct rte_mempool *mp, void *obj)
>   */
>  static inline int __attribute__((always_inline))
>  __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
> -		   unsigned n, int is_mc)
> +		   unsigned n, __attribute__((unused)) int is_mc)
>  {
>  	int ret;
>  #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
> @@ -954,8 +1023,8 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
>  	uint32_t cache_size = mp->cache_size;
>  
>  	/* cache is not enabled or single consumer */
> -	if (unlikely(cache_size == 0 || is_mc == 0 ||
> -		     n >= cache_size || lcore_id >= RTE_MAX_LCORE))
> +	if (unlikely(cache_size == 0 || n >= cache_size ||
> +						lcore_id >= RTE_MAX_LCORE))
>  		goto ring_dequeue;
>  
>  	cache = &mp->local_cache[lcore_id];
> @@ -967,7 +1036,8 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
>  		uint32_t req = n + (cache_size - cache->len);
>  
>  		/* How many do we require i.e. number to fill the cache + the request */
> -		ret = rte_ring_mc_dequeue_bulk(mp->ring, &cache->objs[cache->len], req);
> +		ret = rte_mempool_ext_get_bulk(mp,
> +						&cache->objs[cache->len], req);
>  		if (unlikely(ret < 0)) {
>  			/*
>  			 * In the offchance that we are buffer constrained,
> @@ -995,10 +1065,7 @@ ring_dequeue:
>  #endif /* RTE_MEMPOOL_CACHE_MAX_SIZE > 0 */
>  
>  	/* get remaining objects from ring */
> -	if (is_mc)
> -		ret = rte_ring_mc_dequeue_bulk(mp->ring, obj_table, n);
> -	else
> -		ret = rte_ring_sc_dequeue_bulk(mp->ring, obj_table, n);
> +	ret = rte_mempool_ext_get_bulk(mp, obj_table, n);
>  
>  	if (ret < 0)
>  		__MEMPOOL_STAT_ADD(mp, get_fail, n);
> @@ -1401,6 +1468,79 @@ ssize_t rte_mempool_xmem_usage(void *vaddr, uint32_t elt_num, size_t elt_sz,
>  void rte_mempool_walk(void (*func)(const struct rte_mempool *, void *arg),
>  		      void *arg);
>  
> +/**
> + * Function to get the name of a mempool handler
> + *
> + * @param mp
> + *   A pointer to the mempool structure.
> + * @return
> + *   The name of the mempool handler
> + */
> +char *rte_mempool_get_handler_name(struct rte_mempool *mp);
> +
> +/**
> + * Create a new mempool named *name* in memory.
> + *
> + * This function uses an externally defined alloc callback to allocate memory.
> + * Its size is set to n elements.
> + * All elements of the mempool are allocated separately from the mempool header.
> + *
> + * @param name
> + *   The name of the mempool.
> + * @param n
> + *   The number of elements in the mempool. The optimum size (in terms of
> + *   memory usage) for a mempool is when n is a power of two minus one:
> + *   n = (2^q - 1).
> + * @param cache_size
> + *   If cache_size is non-zero, the rte_mempool library will try to
> + *   limit the accesses to the common lockless pool, by maintaining a
> + *   per-lcore object cache. This argument must be lower or equal to
> + *   CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE and n / 1.5. It is advised to choose
> + *   cache_size to have "n modulo cache_size == 0": if this is
> + *   not the case, some elements will always stay in the pool and will
> + *   never be used. The access to the per-lcore table is of course
> + *   faster than the multi-producer/consumer pool. The cache can be
> + *   disabled if the cache_size argument is set to 0; it can be useful to
> + *   avoid losing objects in cache. Note that even if not used, the
> + *   memory space for cache is always reserved in a mempool structure,
> + *   except if CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE is set to 0.
> + * @param private_data_size
> + *   The size of the private data appended after the mempool
> + *   structure. This is useful for storing some private data after the
> + *   mempool structure, as is done for rte_mbuf_pool for example.
> + * @param mp_init
> + *   A function pointer that is called for initialization of the pool,
> + *   before object initialization. The user can initialize the private
> + *   data in this function if needed. This parameter can be NULL if
> + *   not needed.
> + * @param mp_init_arg
> + *   An opaque pointer to data that can be used in the mempool
> + *   constructor function.
> + * @param obj_init
> + *   A function pointer that is called for each object at
> + *   initialization of the pool. The user can set some meta data in
> + *   objects if needed. This parameter can be NULL if not needed.
> + *   The obj_init() function takes the mempool pointer, the init_arg,
> + *   the object pointer and the object number as parameters.
> + * @param obj_init_arg
> + *   An opaque pointer to data that can be used as an argument for
> + *   each call to the object constructor function.
> + * @param socket_id
> + *   The *socket_id* argument is the socket identifier in the case of
> + *   NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
> + *   constraint for the reserved zone.
> + * @param flags   Flags controlling the behaviour of the mempool (MEMPOOL_F_* flags).
> + * @return
> + *   The pointer to the newly allocated mempool, on success. NULL on error.
> + */
> +struct rte_mempool *
> +rte_mempool_create_ext(const char *name, unsigned n, unsigned elt_size,
> +		unsigned cache_size, unsigned private_data_size,
> +		rte_mempool_ctor_t *mp_init, void *mp_init_arg,
> +		rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
> +		int socket_id, unsigned flags,
> +		const char *handler_name);
> +
>  #ifdef __cplusplus
>  }
>  #endif
> diff --git a/lib/librte_mempool/rte_mempool_default.c b/lib/librte_mempool/rte_mempool_default.c
> new file mode 100644
> index 0000000..ca3255e
> --- /dev/null
> +++ b/lib/librte_mempool/rte_mempool_default.c
> @@ -0,0 +1,236 @@
> +/*-
> + *   BSD LICENSE
> + *
> + *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
> + *   All rights reserved.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in
> + *       the documentation and/or other materials provided with the
> + *       distribution.
> + *     * Neither the name of Intel Corporation nor the names of its
> + *       contributors may be used to endorse or promote products derived
> + *       from this software without specific prior written permission.
> + *
> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#include <stdio.h>
> +#include <rte_mempool.h>
> +#include <rte_malloc.h>
> +#include <string.h>
> +
> +#include "rte_mempool.h"
> +#include "rte_mempool_internal.h"
> +
> +/*
> + * Indirect jump table to support external memory pools
> + */
> +struct rte_mempool_handler_list mempool_handler_list = {
> +	.sl = RTE_SPINLOCK_INITIALIZER,
> +	.num_handlers = 0
> +};
> +
> +/*
> + * Returns the name of the mempool handler used by the given mempool
> + */
> +char *
> +rte_mempool_get_handler_name(struct rte_mempool *mp) {
> +	return mempool_handler_list.handler[mp->handler_idx].name;
> +}
> +
> +int16_t
> +rte_mempool_register_handler(struct rte_mempool_handler *h)
> +{
> +	int16_t handler_idx;
> +
> +	/* serialise access to the handler jump table */
> +	rte_spinlock_lock(&mempool_handler_list.sl);
> +
> +	/* Check whether jump table has space */
> +	if (mempool_handler_list.num_handlers >= RTE_MEMPOOL_MAX_HANDLER_IDX) {
> +		rte_spinlock_unlock(&mempool_handler_list.sl);
> +		RTE_LOG(ERR, MEMPOOL,
> +				"Maximum number of mempool handlers exceeded\n");
> +		return -1;
> +	}
> +
> +	if ((h->put == NULL) || (h->get == NULL) ||
> +		(h->get_count == NULL)) {
> +		rte_spinlock_unlock(&mempool_handler_list.sl);
> +		RTE_LOG(ERR, MEMPOOL,
> +			"Missing callback while registering mempool handler\n");
> +		return -1;
> +	}
> +
> +	/* add new handler index */
> +	handler_idx = mempool_handler_list.num_handlers++;
> +
> +	snprintf(mempool_handler_list.handler[handler_idx].name,
> +				RTE_MEMPOOL_NAMESIZE, "%s", h->name);
> +	mempool_handler_list.handler[handler_idx].alloc = h->alloc;
> +	mempool_handler_list.handler[handler_idx].put = h->put;
> +	mempool_handler_list.handler[handler_idx].get = h->get;
> +	mempool_handler_list.handler[handler_idx].get_count = h->get_count;
> +
> +	rte_spinlock_unlock(&mempool_handler_list.sl);
> +
> +	return handler_idx;
> +}
> +
> +int16_t
> +rte_get_mempool_handler_idx(const char *name)
> +{
> +	int16_t i;
> +
> +	for (i = 0; i < mempool_handler_list.num_handlers; i++) {
> +		if (!strcmp(name, mempool_handler_list.handler[i].name))
> +			return i;
> +	}
> +	return -1;
> +}
> +
> +static int
> +common_ring_mp_put(void *p, void * const *obj_table, unsigned n)
> +{
> +	return rte_ring_mp_enqueue_bulk((struct rte_ring *)p, obj_table, n);
> +}
> +
> +static int
> +common_ring_sp_put(void *p, void * const *obj_table, unsigned n)
> +{
> +	return rte_ring_sp_enqueue_bulk((struct rte_ring *)p, obj_table, n);
> +}
> +
> +static int
> +common_ring_mc_get(void *p, void **obj_table, unsigned n)
> +{
> +	return rte_ring_mc_dequeue_bulk((struct rte_ring *)p, obj_table, n);
> +}
> +
> +static int
> +common_ring_sc_get(void *p, void **obj_table, unsigned n)
> +{
> +	return rte_ring_sc_dequeue_bulk((struct rte_ring *)p, obj_table, n);
> +}
> +
> +static unsigned
> +common_ring_get_count(void *p)
> +{
> +	return rte_ring_count((struct rte_ring *)p);
> +}
> +
> +
> +static void *
> +rte_mempool_common_ring_alloc(struct rte_mempool *mp,
> +		const char *name, unsigned n, int socket_id, unsigned flags)
> +{
> +	struct rte_ring *r;
> +	char rg_name[RTE_RING_NAMESIZE];
> +	int rg_flags = 0;
> +
> +	if (flags & MEMPOOL_F_SP_PUT)
> +		rg_flags |= RING_F_SP_ENQ;
> +	if (flags & MEMPOOL_F_SC_GET)
> +		rg_flags |= RING_F_SC_DEQ;
> +
> +	/* allocate the ring that will be used to store objects */
> +	/* Ring functions will return appropriate errors if we are
> +	 * running as a secondary process etc., so no checks made
> +	 * in this function for that condition */
> +	snprintf(rg_name, sizeof(rg_name), "%s-ring", name);
> +	r = rte_ring_create(rg_name, rte_align32pow2(n+1), socket_id, rg_flags);
> +	if (r == NULL)
> +		return NULL;
> +
> +	mp->rt_pool = (void *)r;
> +
> +	return (void *) r;
> +}
> +
> +void *
> +rte_mempool_ext_alloc(struct rte_mempool *mp,
> +		const char *name, unsigned n, int socket_id, unsigned flags)
> +{
> +	if (mempool_handler_list.handler[mp->handler_idx].alloc) {
> +		return (mempool_handler_list.handler[mp->handler_idx].alloc)
> +						(mp, name, n, socket_id, flags);
> +	}
> +	return NULL;
> +}
> +
> +inline int __attribute__((always_inline))
> +rte_mempool_ext_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
> +{
> +	return (mempool_handler_list.handler[mp->handler_idx].get)
> +						(mp->rt_pool, obj_table, n);
> +}
> +
> +inline int __attribute__((always_inline))
> +rte_mempool_ext_put_bulk(struct rte_mempool *mp, void * const *obj_table,
> +		unsigned n)
> +{
> +	return (mempool_handler_list.handler[mp->handler_idx].put)
> +						(mp->rt_pool, obj_table, n);
> +}
> +
> +int
> +rte_mempool_ext_get_count(const struct rte_mempool *mp)
> +{
> +	return (mempool_handler_list.handler[mp->handler_idx].get_count)
> +						(mp->rt_pool);
> +}
> +
> +static struct rte_mempool_handler handler_mp_mc = {
> +	.name = "ring_mp_mc",
> +	.alloc = rte_mempool_common_ring_alloc,
> +	.put = common_ring_mp_put,
> +	.get = common_ring_mc_get,
> +	.get_count = common_ring_get_count,
> +	.free = NULL
> +};
> +static struct rte_mempool_handler handler_sp_sc = {
> +	.name = "ring_sp_sc",
> +	.alloc = rte_mempool_common_ring_alloc,
> +	.put = common_ring_sp_put,
> +	.get = common_ring_sc_get,
> +	.get_count = common_ring_get_count,
> +	.free = NULL
> +};
> +static struct rte_mempool_handler handler_mp_sc = {
> +	.name = "ring_mp_sc",
> +	.alloc = rte_mempool_common_ring_alloc,
> +	.put = common_ring_mp_put,
> +	.get = common_ring_sc_get,
> +	.get_count = common_ring_get_count,
> +	.free = NULL
> +};
> +static struct rte_mempool_handler handler_sp_mc = {
> +	.name = "ring_sp_mc",
> +	.alloc = rte_mempool_common_ring_alloc,
> +	.put = common_ring_sp_put,
> +	.get = common_ring_mc_get,
> +	.get_count = common_ring_get_count,
> +	.free = NULL
> +};
> +
> +REGISTER_MEMPOOL_HANDLER(handler_mp_mc);
> +REGISTER_MEMPOOL_HANDLER(handler_sp_sc);
> +REGISTER_MEMPOOL_HANDLER(handler_mp_sc);
> +REGISTER_MEMPOOL_HANDLER(handler_sp_mc);
> diff --git a/lib/librte_mempool/rte_mempool_internal.h b/lib/librte_mempool/rte_mempool_internal.h
> new file mode 100644
> index 0000000..982396f
> --- /dev/null
> +++ b/lib/librte_mempool/rte_mempool_internal.h
> @@ -0,0 +1,75 @@
> +/*-
> + *   BSD LICENSE
> + *
> + *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
> + *   All rights reserved.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in
> + *       the documentation and/or other materials provided with the
> + *       distribution.
> + *     * Neither the name of Intel Corporation nor the names of its
> + *       contributors may be used to endorse or promote products derived
> + *       from this software without specific prior written permission.
> + *
> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#ifndef _RTE_MEMPOOL_INTERNAL_H_
> +#define _RTE_MEMPOOL_INTERNAL_H_
> +
> +#include <rte_spinlock.h>
> +#include <rte_mempool.h>
> +
> +#define RTE_MEMPOOL_MAX_HANDLER_IDX 16
> +
> +struct rte_mempool_handler {
> +	char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool handler */
> +
> +	rte_mempool_alloc_t alloc;
> +
> +	rte_mempool_get_count get_count;
> +
> +	rte_mempool_free_t free;
> +
> +	rte_mempool_put_t put;
> +
> +	rte_mempool_get_t get;
> +} __rte_cache_aligned;
> +
> +struct rte_mempool_handler_list {
> +	rte_spinlock_t sl;		  /**< Spinlock for add/delete. */
> +
> +	int32_t num_handlers;	  /**< Number of handlers that are valid. */
> +
> +	/* storage for all possible handlers */
> +	struct rte_mempool_handler handler[RTE_MEMPOOL_MAX_HANDLER_IDX];
> +};
> +
> +int16_t rte_mempool_register_handler(struct rte_mempool_handler *h);
> +int16_t rte_get_mempool_handler_idx(const char *name);
> +
> +#define REGISTER_MEMPOOL_HANDLER(h) \
> +static int16_t __attribute__((used)) testfn_##h(void);\
> +int16_t __attribute__((constructor, used)) testfn_##h(void)\
> +{\
> +	return rte_mempool_register_handler(&h);\
> +}
> +
> +#endif
> diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
> index 17151e0..589db27 100644
> --- a/lib/librte_mempool/rte_mempool_version.map
> +++ b/lib/librte_mempool/rte_mempool_version.map
> @@ -6,6 +6,7 @@ DPDK_2.0 {
>  	rte_mempool_calc_obj_size;
>  	rte_mempool_count;
>  	rte_mempool_create;
> +	rte_mempool_create_ext;
>  	rte_mempool_dump;
>  	rte_mempool_list_dump;
>  	rte_mempool_lookup;



-- 
   Jan Viktorin                  E-mail: Viktorin@RehiveTech.com
   System Architect              Web:    www.RehiveTech.com
   RehiveTech
   Brno, Czech Republic

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH 0/6] external mempool manager
  2016-02-16 14:48 ` [PATCH 0/6] " David Hunt
                     ` (5 preceding siblings ...)
  2016-02-16 14:48   ` [PATCH 6/6] mempool: add in the RTE_NEXT_ABI protection for ABI breakages David Hunt
@ 2016-02-19 13:25   ` Olivier MATZ
  2016-02-29 10:55     ` Hunt, David
  2016-03-09  9:50   ` [PATCH v3 0/4] " David Hunt
  7 siblings, 1 reply; 238+ messages in thread
From: Olivier MATZ @ 2016-02-19 13:25 UTC (permalink / raw)
  To: David Hunt, dev

Hi,

On 02/16/2016 03:48 PM, David Hunt wrote:
> Hi list.
> 
> Here's the v2 version of a proposed patch for an external mempool manager

Just a note that the "v2" is missing in the title; it would help
to have it for the next versions of the series.

Regards,
Olivier

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH 1/6] mempool: add external mempool manager support
  2016-02-16 14:48   ` [PATCH 1/6] mempool: add external mempool manager support David Hunt
  2016-02-16 19:27     ` [dpdk-dev, " Jan Viktorin
@ 2016-02-19 13:30     ` Olivier MATZ
  2016-02-29 11:11       ` Hunt, David
  1 sibling, 1 reply; 238+ messages in thread
From: Olivier MATZ @ 2016-02-19 13:30 UTC (permalink / raw)
  To: David Hunt, dev

Hi David,

On 02/16/2016 03:48 PM, David Hunt wrote:
> Adds the new rte_mempool_create_ext api and callback mechanism for
> external mempool handlers
> 
> Modifies the existing rte_mempool_create to set up the handler_idx to
> the relevant mempool handler based on the handler name:
>     ring_sp_sc
>     ring_mp_mc
>     ring_sp_mc
>     ring_mp_sc
> 
> v2: merges the duplicated code in rte_mempool_xmem_create and
> rte_mempool_create_ext into one common function. The old functions
> now call the new common function with the relevant parameters.
> 
> Signed-off-by: David Hunt <david.hunt@intel.com>

I think the refactoring of rte_mempool_create() (adding of
mempool_create()) should go in another commit. It will make the
patches much easier to read.

Also, I'm sorry but it seems that several comments or question I've made
in http://dpdk.org/ml/archives/dev/2016-February/032706.html are
not addressed.

Examples:
- putting some part of the patch in separate commits
- meaning of "rt_pool"
- put_pool_bulk unclear comment
- should we also have get_pool_bulk stats?
- missing _MEMPOOL_STAT_ADD() in mempool_bulk()
- why internal in rte_mempool_internal.h?
- why default in rte_mempool_default.c?
- remaining references to stack handler (in a comment)
- ...?

As you know, doing a proper code review takes a lot of time. If I
have to re-check all of my previous comments, it will take even
more. I'm not saying all my comments require a code change, but in case
you don't agree, please at least explain your opinion so we can debate
on the list.

Regards,
Olivier

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH 2/6] mempool: add stack (lifo) based external mempool handler
  2016-02-16 14:48   ` [PATCH 2/6] mempool: add stack (lifo) based external mempool handler David Hunt
@ 2016-02-19 13:31     ` Olivier MATZ
  2016-02-29 11:04       ` Hunt, David
  2016-03-08 20:45       ` Venkatesan, Venky
  0 siblings, 2 replies; 238+ messages in thread
From: Olivier MATZ @ 2016-02-19 13:31 UTC (permalink / raw)
  To: David Hunt, dev

Hi David,

On 02/16/2016 03:48 PM, David Hunt wrote:
> adds a simple stack based mempool handler
> 
> Signed-off-by: David Hunt <david.hunt@intel.com>
> ---
>  lib/librte_mempool/Makefile            |   2 +-
>  lib/librte_mempool/rte_mempool.c       |   4 +-
>  lib/librte_mempool/rte_mempool.h       |   1 +
>  lib/librte_mempool/rte_mempool_stack.c | 164 +++++++++++++++++++++++++++++++++
>  4 files changed, 169 insertions(+), 2 deletions(-)
>  create mode 100644 lib/librte_mempool/rte_mempool_stack.c
> 

I don't get what is the purpose of this handler. Is it an example
or is it something that could be useful for dpdk applications?

If it's an example, we should find a way to put the code outside
the librte_mempool library, maybe in the test program. I see there
is also a "custom handler". Do we really need to have both?


Regards,
Olivier

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH 6/6] mempool: add in the RTE_NEXT_ABI protection for ABI breakages
  2016-02-16 14:48   ` [PATCH 6/6] mempool: add in the RTE_NEXT_ABI protection for ABI breakages David Hunt
@ 2016-02-19 13:33     ` Olivier MATZ
  0 siblings, 0 replies; 238+ messages in thread
From: Olivier MATZ @ 2016-02-19 13:33 UTC (permalink / raw)
  To: David Hunt, dev

Hi David,

On 02/16/2016 03:48 PM, David Hunt wrote:
> v2: Kept all the NEXT_ABI defs to this patch so as to make the
> previous patches easier to read, and also to make it clear what
> code is necessary to keep ABI compatibility when NEXT_ABI is
> disabled.
> 
> Signed-off-by: David Hunt <david.hunt@intel.com>
> ---
>  app/test/Makefile                |   2 +
>  app/test/test_mempool_perf.c     |   3 +
>  lib/librte_mbuf/rte_mbuf.c       |   7 ++
>  lib/librte_mempool/Makefile      |   2 +
>  lib/librte_mempool/rte_mempool.c | 240 ++++++++++++++++++++++++++++++++++++++-
>  lib/librte_mempool/rte_mempool.h |  68 ++++++++++-
>  6 files changed, 320 insertions(+), 2 deletions(-)

Given the size of this patch, I don't think it's worth adding the
NEXT ABI in that case.

Regards,
Olivier

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH 0/6] external mempool manager
  2016-02-19 13:25   ` [PATCH 0/6] external mempool manager Olivier MATZ
@ 2016-02-29 10:55     ` Hunt, David
  0 siblings, 0 replies; 238+ messages in thread
From: Hunt, David @ 2016-02-29 10:55 UTC (permalink / raw)
  To: Olivier MATZ, dev



On 2/19/2016 1:25 PM, Olivier MATZ wrote:
> Hi,
>
> On 02/16/2016 03:48 PM, David Hunt wrote:
>> Hi list.
>>
>> Here's the v2 version of a proposed patch for an external mempool manager
> Just to notice the "v2" is missing in the title, it would help
> to have it for next versions of the series.
>
Thanks, Olivier, I will ensure it's in the next patchset.
Regards,
David.

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH 2/6] mempool: add stack (lifo) based external mempool handler
  2016-02-19 13:31     ` Olivier MATZ
@ 2016-02-29 11:04       ` Hunt, David
  2016-03-04  9:04         ` Olivier MATZ
  2016-03-08 20:45       ` Venkatesan, Venky
  1 sibling, 1 reply; 238+ messages in thread
From: Hunt, David @ 2016-02-29 11:04 UTC (permalink / raw)
  To: Olivier MATZ, dev


On 2/19/2016 1:31 PM, Olivier MATZ wrote:
> Hi David,
>
> On 02/16/2016 03:48 PM, David Hunt wrote:
>> adds a simple stack based mempool handler
>>
>> Signed-off-by: David Hunt <david.hunt@intel.com>
>> ---
>>   lib/librte_mempool/Makefile            |   2 +-
>>   lib/librte_mempool/rte_mempool.c       |   4 +-
>>   lib/librte_mempool/rte_mempool.h       |   1 +
>>   lib/librte_mempool/rte_mempool_stack.c | 164 +++++++++++++++++++++++++++++++++
>>   4 files changed, 169 insertions(+), 2 deletions(-)
>>   create mode 100644 lib/librte_mempool/rte_mempool_stack.c
>>
> I don't get what is the purpose of this handler. Is it an example
> or is it something that could be useful for dpdk applications?
>
> If it's an example, we should find a way to put the code outside
> the librte_mempool library, maybe in the test program. I see there
> is also a "custom handler". Do we really need to have both?
They are both example handlers. I agree that we could reduce to one, and
since the 'custom' handler has autotests, I would suggest we keep that
one.

The next question is where it should live. I agree that it's not ideal 
to have example code living in the same directory as the mempool 
library, but they are an integral part of the library itself. How about 
creating a handlers sub-directory? We could then keep all additional and 
sample handlers in there, away from the built-in handlers. Also, seeing 
as the handler code is intended to be part of the library, I think 
moving it out to the examples directory may confuse matters further.

Regards,
David.

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH 1/6] mempool: add external mempool manager support
  2016-02-19 13:30     ` [PATCH " Olivier MATZ
@ 2016-02-29 11:11       ` Hunt, David
  2016-03-04  9:04         ` Olivier MATZ
  0 siblings, 1 reply; 238+ messages in thread
From: Hunt, David @ 2016-02-29 11:11 UTC (permalink / raw)
  To: Olivier MATZ, dev


On 2/19/2016 1:30 PM, Olivier MATZ wrote:
> Hi David,
>
> On 02/16/2016 03:48 PM, David Hunt wrote:
>> Adds the new rte_mempool_create_ext api and callback mechanism for
>> external mempool handlers
>>
>> Modifies the existing rte_mempool_create to set up the handler_idx to
>> the relevant mempool handler based on the handler name:
>>      ring_sp_sc
>>      ring_mp_mc
>>      ring_sp_mc
>>      ring_mp_sc
>>
>> v2: merges the duplicated code in rte_mempool_xmem_create and
>> rte_mempool_create_ext into one common function. The old functions
>> now call the new common function with the relevant parameters.
>>
>> Signed-off-by: David Hunt <david.hunt@intel.com>
> I think the refactoring of rte_mempool_create() (adding of
> mempool_create()) should go in another commit. It will make the
> patches much easier to read.
>
> Also, I'm sorry but it seems that several comments or question I've made
> in http://dpdk.org/ml/archives/dev/2016-February/032706.html are
> not addressed.
>
> Examples:
> - putting some part of the patch in separate commits
> - meaning of "rt_pool"
> - put_pool_bulk unclear comment
> - should we also have get_pool_bulk stats?
> - missing _MEMPOOL_STAT_ADD() in mempool_bulk()
> - why internal in rte_mempool_internal.h?
> - why default in rte_mempool_default.c?
> - remaining references to stack handler (in a comment)
> - ...?
>
> As you know, doing a proper code review takes a lot of time. If I
> have to re-check all of my previous comments, it will take even
> more. I'm not saying all my comments require a code change, but in case
> you don't agree, please at least explain your opinion so we can debate
> on the list.
>
Hi Olivier,
    Sincerest apologies. I had intended to come back to your original
comments after refactoring the code; I will do that now. I did take them
into consideration, but I see now that I need to do further work, such as
a clearer name for rt_pool. I will respond to your original email.
Thanks
David.

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH 1/5] mempool: add external mempool manager support
  2016-02-04 14:52   ` Olivier MATZ
  2016-02-04 16:47     ` Hunt, David
  2016-02-04 17:34     ` Hunt, David
@ 2016-03-01 13:32     ` Hunt, David
  2016-03-04  9:05       ` Olivier MATZ
  2 siblings, 1 reply; 238+ messages in thread
From: Hunt, David @ 2016-03-01 13:32 UTC (permalink / raw)
  To: Olivier MATZ, dev

Olivier,
     Here are my comments on your feedback. Hopefully I've covered all of
it this time, and I've summarised the outstanding questions at the bottom.

On 2/4/2016 2:52 PM, Olivier MATZ wrote:
>
>> -#ifndef RTE_LIBRTE_XEN_DOM0
>> -/* stub if DOM0 support not configured */
>> -struct rte_mempool *
>> -rte_dom0_mempool_create(const char *name __rte_unused,
>> -            unsigned n __rte_unused,
>> -            unsigned elt_size __rte_unused,
>> -            unsigned cache_size __rte_unused,
>> -            unsigned private_data_size __rte_unused,
>> -            rte_mempool_ctor_t *mp_init __rte_unused,
>> -            void *mp_init_arg __rte_unused,
>> -            rte_mempool_obj_ctor_t *obj_init __rte_unused,
>> -            void *obj_init_arg __rte_unused,
>> -            int socket_id __rte_unused,
>> -            unsigned flags __rte_unused)
>> -{
>> -    rte_errno = EINVAL;
>> -    return NULL;
>> -}
>> -#endif
>> -
>
> Could we move this is a separated commit?
> "mempool: remove unused rte_dom0_mempool_create stub"

Will do for v3.


--snip--
> return rte_mempool_xmem_create(name, n, elt_size,
>> -                           cache_size, private_data_size,
>> -                           mp_init, mp_init_arg,
>> -                           obj_init, obj_init_arg,
>> -                           socket_id, flags,
>> -                           NULL, NULL, MEMPOOL_PG_NUM_DEFAULT,
>> -                           MEMPOOL_PG_SHIFT_MAX);
>> +            cache_size, private_data_size,
>> +            mp_init, mp_init_arg,
>> +            obj_init, obj_init_arg,
>> +            socket_id, flags,
>> +            NULL, NULL,
>> +            MEMPOOL_PG_NUM_DEFAULT, MEMPOOL_PG_SHIFT_MAX);
>>   }
>
> As far as I can see, you are not modifying the code here, only the
> style. For better readability, it should go in another commit that
> only fixes indent or style issues.
>

I've removed any changes to style in v2; they only make things more
difficult to read.

> Also, I think the proper indentation is to use only one tab for the
> subsequent lines.

I've done this in v2.

>
>> @@ -598,6 +568,22 @@ rte_mempool_xmem_create(const char *name, 
>> unsigned n, unsigned elt_size,
>>       mp->cache_flushthresh = CALC_CACHE_FLUSHTHRESH(cache_size);
>>       mp->private_data_size = private_data_size;
>>
>> +    /*
>> +     * Since we have 4 combinations of the SP/SC/MP/MC, and stack,
>> +     * examine the
>> +     * flags to set the correct index into the handler table.
>> +     */
>
> nit: comment style is not correct
>

Will fix.

>> +    if (flags & MEMPOOL_F_USE_STACK)
>> +        mp->handler_idx = rte_get_mempool_handler("stack");
>
> The stack handler does not exist yet, it is introduced in the next
> commit. I think this code should be moved in the next commit too.

Done in v2

>
>> @@ -622,6 +607,10 @@ rte_mempool_xmem_create(const char *name, 
>> unsigned n, unsigned elt_size,
>>
>>       mp->elt_va_end = mp->elt_va_start;
>>
>> +    /* Parameters are setup. Call the mempool handler alloc */
>> +    if ((rte_mempool_ext_alloc(mp, name, n, socket_id, flags)) == NULL)
>> +        goto exit;
>> +
>
> I think some memory needs to be freed here. At least 'te'.

Done in v2

>> @@ -681,7 +670,9 @@ rte_mempool_dump_cache(FILE *f, const struct 
>> rte_mempool *mp)
>>       fprintf(f, "    cache_size=%"PRIu32"\n", mp->cache_size);
>>       for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
>>           cache_count = mp->local_cache[lcore_id].len;
>> -        fprintf(f, "    cache_count[%u]=%u\n", lcore_id, cache_count);
>> +        if (cache_count > 0)
>> +            fprintf(f, "    cache_count[%u]=%u\n",
>> +                        lcore_id, cache_count);
>>           count += cache_count;
>>       }
>>       fprintf(f, "    total_cache_count=%u\n", count);
>
> This could also be moved in a separate commit.

Removed this change, as it's not really relevant to the mempool manager.

>> @@ -825,7 +815,7 @@ rte_mempool_dump(FILE *f, const struct 
>> rte_mempool *mp)
>>               mp->size);
>>
>>       cache_count = rte_mempool_dump_cache(f, mp);
>> -    common_count = rte_ring_count(mp->ring);
>> +    common_count = /* rte_ring_count(mp->ring)*/0;
>>       if ((cache_count + common_count) > mp->size)
>>           common_count = mp->size - cache_count;
>>       fprintf(f, "  common_pool_count=%u\n", common_count);
>
> should it be rte_mempool_ext_get_count(mp) instead?
>

Done.

>
>
>> @@ -919,3 +909,111 @@ void rte_mempool_walk(void (*func)(const struct 
>> rte_mempool *, void *),
>>
>>       rte_rwlock_read_unlock(RTE_EAL_MEMPOOL_RWLOCK);
>>   }
>> +
>> +
>> +/* create the mempool using an external mempool manager */
>> +struct rte_mempool *
>> +rte_mempool_create_ext(const char *name, unsigned n, unsigned elt_size,
>> +            unsigned cache_size, unsigned private_data_size,
>> +            rte_mempool_ctor_t *mp_init, void *mp_init_arg,
>> +            rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
>> +            int socket_id, unsigned flags,
>> +            const char *handler_name)
>> +{
>
> I would have used one tab here for subsequent lines.

Done in v2

>
>> +    char mz_name[RTE_MEMZONE_NAMESIZE];
>> +    struct rte_mempool_list *mempool_list;
>> +    struct rte_mempool *mp = NULL;
>> +    struct rte_tailq_entry *te;
>> +    const struct rte_memzone *mz;
>> +    size_t mempool_size;
>> +    int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
>> +    int rg_flags = 0;
>> +    int16_t handler_idx;
>> +
>> +    mempool_list = RTE_TAILQ_CAST(rte_mempool_tailq.head, 
>> rte_mempool_list);
>> +
>> +    /* asked cache too big */
>> +    if (cache_size > RTE_MEMPOOL_CACHE_MAX_SIZE ||
>> +        CALC_CACHE_FLUSHTHRESH(cache_size) > n) {
>> +        rte_errno = EINVAL;
>> +        return NULL;
>> +    }
>> +
>> +    handler_idx = rte_get_mempool_handler(handler_name);
>> +    if (handler_idx < 0) {
>> +        RTE_LOG(ERR, MEMPOOL, "Cannot find mempool handler by 
>> name!\n");
>> +        goto exit;
>> +    }
>> +
>> +    /* ring flags */
>> +    if (flags & MEMPOOL_F_SP_PUT)
>> +        rg_flags |= RING_F_SP_ENQ;
>> +    if (flags & MEMPOOL_F_SC_GET)
>> +        rg_flags |= RING_F_SC_DEQ;
>> +
>> ...
>
> I have the same comment than Jerin here. I think it should be
> factorized with rte_mempool_xmem_create() if possible. Maybe a
> at least a function rte_mempool_init() could be introduced, in
> the same model than rte_ring_init().

factorization done in v2.

>
>> diff --git a/lib/librte_mempool/rte_mempool.h 
>> b/lib/librte_mempool/rte_mempool.h
>> index 6e2390a..620cfb7 100644
>> --- a/lib/librte_mempool/rte_mempool.h
>> +++ b/lib/librte_mempool/rte_mempool.h
>> @@ -88,6 +88,8 @@ extern "C" {
>>   struct rte_mempool_debug_stats {
>>       uint64_t put_bulk;         /**< Number of puts. */
>>       uint64_t put_objs;         /**< Number of objects successfully 
>> put. */
>> +    uint64_t put_pool_bulk;    /**< Number of puts into pool. */
>> +    uint64_t put_pool_objs;    /**< Number of objects into pool. */
>>       uint64_t get_success_bulk; /**< Successful allocation number. */
>>       uint64_t get_success_objs; /**< Objects successfully allocated. */
>>       uint64_t get_fail_bulk;    /**< Failed allocation number. */
>
> I think the comment of put_pool_objs is not very clear.
> Shouldn't we have the same stats for get?
>

Not used, removed. Covered by put_bulk.

>
>> @@ -123,6 +125,7 @@ struct rte_mempool_objsz {
>>   #define RTE_MEMPOOL_NAMESIZE 32 /**< Maximum length of a memory 
>> pool. */
>>   #define RTE_MEMPOOL_MZ_PREFIX "MP_"
>>
>> +
>>   /* "MP_<name>" */
>>   #define    RTE_MEMPOOL_MZ_FORMAT    RTE_MEMPOOL_MZ_PREFIX "%s"
>>
>
> to be removed

Done in v2.

>
>> @@ -175,12 +178,85 @@ struct rte_mempool_objtlr {
>>   #endif
>>   };
>>
>> +/* Handler functions for external mempool support */
>> +typedef void *(*rte_mempool_alloc_t)(struct rte_mempool *mp,
>> +        const char *name, unsigned n, int socket_id, unsigned flags);
>> +typedef int (*rte_mempool_put_t)(void *p,
>> +        void * const *obj_table, unsigned n);
>> +typedef int (*rte_mempool_get_t)(void *p, void **obj_table,
>> +        unsigned n);
>> +typedef unsigned (*rte_mempool_get_count)(void *p);
>> +typedef int(*rte_mempool_free_t)(struct rte_mempool *mp);
>
> a space is missing after 'int'.

Done in v2.

>
>
>> +
>> +/**
>> + * @internal wrapper for external mempool manager alloc callback.
>> + *
>> + * @param mp
>> + *   Pointer to the memory pool.
>> + * @param name
>> + *   Name of the statistics field to increment in the memory pool.
>> + * @param n
>> + *   Number to add to the object-oriented statistics.
>
> Are this comments correct?

Fixed in v2

>
>
>> + * @param socket_id
>> + *   socket id on which to allocate.
>> + * @param flags
>> + *   general flags to allocate function
>
> We could add that we are talking about MEMPOOL_F_* flags.
>
> By the way, the '@return' is missing in all declarations.
>

Will fix in v3

>
>> +/**
>> + * @internal wrapper for external mempool manager get_count callback.
>> + *
>> + * @param mp
>> + *   Pointer to the memory pool.
>> + */
>> +int
>> +rte_mempool_ext_get_count(const struct rte_mempool *mp);
>
> should it be unsigned instead of int?
>

Yes. Will change.

>
>> +
>> +/**
>> + * @internal wrapper for external mempool manager free callback.
>> + *
>> + * @param mp
>> + *   Pointer to the memory pool.
>> + */
>> +int
>> +rte_mempool_ext_free(struct rte_mempool *mp);
>> +
>>   /**
>>    * The RTE mempool structure.
>>    */
>>   struct rte_mempool {
>>       char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool. */
>> -    struct rte_ring *ring;           /**< Ring to store objects. */
>>       phys_addr_t phys_addr;           /**< Phys. addr. of mempool 
>> struct. */
>>       int flags;                       /**< Flags of the mempool. */
>>       uint32_t size;                   /**< Size of the mempool. */
>> @@ -194,6 +270,11 @@ struct rte_mempool {
>>
>>       unsigned private_data_size;      /**< Size of private data. */
>>
>> +    /* Common pool data structure pointer */
>> +    void *rt_pool __rte_cache_aligned;
>
> What is the meaning of rt_pool?

I agree that it's probably not a very good name. Since it's basically the
pointer used by the handler callbacks, maybe we should call it
mempool_storage? That leaves it generic enough that it can point at a
ring, an array, or whatever else is needed for a particular handler.

>> +
>> +    int16_t handler_idx;
>> +
>
> I don't think I'm getting why an index is better than a pointer to
> the struct rte_mempool_handler. It would simplify the add_handler()
> function. See below for a detailed explaination.
>

As discussed in previous mails, it's to facilitate secondary processes.

>> @@ -223,6 +304,10 @@ struct rte_mempool {
>>   #define MEMPOOL_F_NO_CACHE_ALIGN 0x0002 /**< Do not align objs on 
>> cache lines.*/
>>   #define MEMPOOL_F_SP_PUT         0x0004 /**< Default put is 
>> "single-producer".*/
>>   #define MEMPOOL_F_SC_GET         0x0008 /**< Default get is 
>> "single-consumer".*/
>> +#define MEMPOOL_F_USE_STACK      0x0010 /**< Use a stack for the 
>> common pool. */
>
> Stack is not implemented in this commit. It should be moved in next
> commit.

Done in v2

>> +#define MEMPOOL_F_USE_TM         0x0020
>> +#define MEMPOOL_F_NO_SECONDARY   0x0040
>> +
>
> What are these flags?

Not needed. Part of temporary change. Removed.

>> @@ -728,7 +813,6 @@ rte_dom0_mempool_create(const char *name, 
>> unsigned n, unsigned elt_size,
>>           rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
>>           int socket_id, unsigned flags);
>>
>> -
>>   /**
>>    * Dump the status of the mempool to the console.
>>    *
>
> style

will fix in v3.

>
>
>> @@ -753,7 +837,7 @@ void rte_mempool_dump(FILE *f, const struct 
>> rte_mempool *mp);
>>    */
>>   static inline void __attribute__((always_inline))
>>   __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
>> -            unsigned n, int is_mp)
>> +            unsigned n, __attribute__((unused)) int is_mp)
>
> You could use __rte_unused instead of __attribute__((unused))

will change in v3

>
>> @@ -769,8 +853,7 @@ __mempool_put_bulk(struct rte_mempool *mp, void * 
>> const *obj_table,
>>
>>   #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
>>       /* cache is not enabled or single producer or non-EAL thread */
>> -    if (unlikely(cache_size == 0 || is_mp == 0 ||
>> -             lcore_id >= RTE_MAX_LCORE))
>> +    if (unlikely(cache_size == 0 || lcore_id >= RTE_MAX_LCORE))
>>           goto ring_enqueue;
>>
>>       /* Go straight to ring if put would overflow mem allocated for 
>> cache */
>
> If I understand well, we now always use the cache, even if the mempool
> is single-producer. I was wondering if it would have a performance
> impact... I suppose that using the cache is more efficient than the ring
> in single-producer mode, so it may increase performance. Do you have an
> idea of the impact here?

I've seen very little performance gain: maybe a couple of percent in some
tests, and up to a 10% drop in some single-core tests. I'll do some more
specific testing based on SP versus MP.

>
> I think we could remove the parameter as the function is marked as
> internal. The comment above should also be fixed. The same comments
> apply to the get() functions.
>

will fix comments in v3, and see if we should remove is_mp based on more 
performance testing.

>
>> @@ -793,8 +876,8 @@ __mempool_put_bulk(struct rte_mempool *mp, void * 
>> const *obj_table,
>>
>>       cache->len += n;
>>
>> -    if (cache->len >= flushthresh) {
>> -        rte_ring_mp_enqueue_bulk(mp->ring, &cache->objs[cache_size],
>> +    if (unlikely(cache->len >= flushthresh)) {
>> +        rte_mempool_ext_put_bulk(mp, &cache->objs[cache_size],
>>                   cache->len - cache_size);
>
> Shouldn't we add a __MEMPOOL_STAT_ADD(mp, put_pool,
>   cache->len - cache_size) here ?
>

Correct. Added in v3.

>> @@ -954,8 +1025,8 @@ __mempool_get_bulk(struct rte_mempool *mp, void 
>> **obj_table,
>>       uint32_t cache_size = mp->cache_size;
>>
>>       /* cache is not enabled or single consumer */
>> -    if (unlikely(cache_size == 0 || is_mc == 0 ||
>> -             n >= cache_size || lcore_id >= RTE_MAX_LCORE))
>> +    if (unlikely(cache_size == 0 || n >= cache_size ||
>> +                        lcore_id >= RTE_MAX_LCORE))
>
> incorrect indent

will fix in v3

>
>> @@ -967,7 +1038,8 @@ __mempool_get_bulk(struct rte_mempool *mp, void 
>> **obj_table,
>>           uint32_t req = n + (cache_size - cache->len);
>>
>>           /* How many do we require i.e. number to fill the cache + 
>> the request */
>> -        ret = rte_ring_mc_dequeue_bulk(mp->ring, 
>> &cache->objs[cache->len], req);
>> +        ret = rte_mempool_ext_get_bulk(mp,
>> +                        &cache->objs[cache->len], req);
>
> indent

will fix in v3


>> +/**
>> + * Function to get an index to an external mempool manager
>> + *
>> + * @param name
>> + *   The name of the mempool handler to search for in the list of 
>> handlers
>> + * @return
>> + *   The index of the mempool handler in the list of registered mempool
>> + *   handlers
>> + */
>> +int16_t
>> +rte_get_mempool_handler(const char *name);
>
> I would prefer a function like this:
>
> const struct rte_mempool_handler *
> rte_get_mempool_handler(const char *name);
>
> (detailed explaination below)

Already discussed previously: an index is needed rather than a pointer
because of secondary processes.


>> diff --git a/lib/librte_mempool/rte_mempool_default.c 
>> b/lib/librte_mempool/rte_mempool_default.c
>> new file mode 100644
>> index 0000000..2493dc1
>> --- /dev/null
>> +++ b/lib/librte_mempool/rte_mempool_default.c
>> +#include "rte_mempool_internal.h"
>> +
>> +/*
>> + * Indirect jump table to support external memory pools
>> + */
>> +struct rte_mempool_handler_list mempool_handler_list = {
>> +    .sl =  RTE_SPINLOCK_INITIALIZER ,
>> +    .num_handlers = 0
>> +};
>> +
>> +/* TODO Convert to older mechanism of an array of structs */
>> +int16_t
>> +add_handler(struct rte_mempool_handler *h)
>> +{
>> +    int16_t handler_idx;
>> +
>> +    /*  */
>> +    rte_spinlock_lock(&mempool_handler_list.sl);
>> +
>> +    /* Check whether jump table has space */
>> +    if (mempool_handler_list.num_handlers >= 
>> RTE_MEMPOOL_MAX_HANDLER_IDX) {
>> +        rte_spinlock_unlock(&mempool_handler_list.sl);
>> +        RTE_LOG(ERR, MEMPOOL,
>> +                "Maximum number of mempool handlers exceeded\n");
>> +        return -1;
>> +    }
>> +
>> +    if ((h->put == NULL) || (h->get == NULL) ||
>> +        (h->get_count == NULL)) {
>> +        rte_spinlock_unlock(&mempool_handler_list.sl);
>> +         RTE_LOG(ERR, MEMPOOL,
>> +                    "Missing callback while registering mempool 
>> handler\n");
>> +        return -1;
>> +    }
>> +
>> +    /* add new handler index */
>> +    handler_idx = mempool_handler_list.num_handlers++;
>> +
>> +    snprintf(mempool_handler_list.handler[handler_idx].name,
>> +                RTE_MEMPOOL_NAMESIZE, "%s", h->name);
>> +    mempool_handler_list.handler[handler_idx].alloc = h->alloc;
>> +    mempool_handler_list.handler[handler_idx].put = h->put;
>> +    mempool_handler_list.handler[handler_idx].get = h->get;
>> +    mempool_handler_list.handler[handler_idx].get_count = h->get_count;
>> +
>> +    rte_spinlock_unlock(&mempool_handler_list.sl);
>> +
>> +    return handler_idx;
>> +}
>
> Why not using a similar mechanism than what we have for PMDs?
>
>     void rte_eal_driver_register(struct rte_driver *driver)
>     {
>         TAILQ_INSERT_TAIL(&dev_driver_list, driver, next);
>     }
>
> To do that, you just need to add a TAILQ_ENTRY() in your
> rte_mempool_handler structure. This would avoid to duplicate the
> structure into a static array whose size is limited.
>
> Accessing to the callbacks would be easier:
>
>     return mp->mp_handler->put(mp->rt_pool, obj_table, n);
>
> instead of:
>
>     return (mempool_handler_list.handler[mp->handler_idx].put)
>                     (mp->rt_pool, obj_table, n);
>
> If we really want to copy the handlers somewhere, it could be in
> the mempool structure. It would avoid an extra dereference
> (note the first '.' instead of '->'):
>
>     return mp.mp_handler->put(mp->rt_pool, obj_table, n);
>
> After doing that, we could ask ourself if the wrappers are still
> useful or not. I would have say that they could be removed.
>
>
> The spinlock could be kept, although it may look a bit overkill:
> - I don't expect to have several loading at the same time
> - There is no unregister() function, so there is no risk to
>   browse the list atomically
>

Already discussed previously: an index is needed rather than a pointer
because of secondary processes.

> Last thing, I think this code should go in rte_mempool.c, not in
> rte_mempool_default.c.

I was trying to keep the default handlers together in their own file, 
rather than having them in with the mempool framework. I think it's 
better having them separate, and new handlers can go in their own files 
also. no?


>> +
>> +/* TODO Convert to older mechanism of an array of structs */
>> +int16_t
>> +rte_get_mempool_handler(const char *name)
>> +{
>> +    int16_t i;
>> +
>> +    for (i = 0; i < mempool_handler_list.num_handlers; i++) {
>> +        if (!strcmp(name, mempool_handler_list.handler[i].name))
>> +            return i;
>> +    }
>> +    return -1;
>> +}
>
> This would be replaced by a TAILQ_FOREACH().

Already discussed previously: an index is needed rather than a pointer
because of secondary processes.

>
>> +static void *
>> +rte_mempool_common_ring_alloc(struct rte_mempool *mp,
>> +        const char *name, unsigned n, int socket_id, unsigned flags)
>> +{
>> +    struct rte_ring *r;
>> +    char rg_name[RTE_RING_NAMESIZE];
>> +    int rg_flags = 0;
>> +
>> +    if (flags & MEMPOOL_F_SP_PUT)
>> +        rg_flags |= RING_F_SP_ENQ;
>> +    if (flags & MEMPOOL_F_SC_GET)
>> +        rg_flags |= RING_F_SC_DEQ;
>> +
>> +    /* allocate the ring that will be used to store objects */
>> +    /* Ring functions will return appropriate errors if we are
>> +     * running as a secondary process etc., so no checks made
>> +     * in this function for that condition */
>> +    snprintf(rg_name, sizeof(rg_name), "%s-ring", name);
>> +    r = rte_ring_create(rg_name, rte_align32pow2(n+1), socket_id, 
>> rg_flags);
>> +    if (r == NULL)
>> +        return NULL;
>> +
>> +    mp->rt_pool = (void *)r;
>> +
>> +    return (void *) r;
>
> I don't think the explicit casts are required.

will change in v3

>
>> --- /dev/null
>> +++ b/lib/librte_mempool/rte_mempool_internal.h
>
> Is it the proper name?
> We could imagine a mempool handler provided by a plugin, and
> in this case this code should go in rte_mempool.h.

I was trying to keep the public APIs in rte_mempool.h, and all the
private stuff in rte_mempool_internal.h. Maybe a better name would be
rte_mempool_private.h?

>> +
>> +struct rte_mempool_handler {
>> +    char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool handler */
>
> I would use a const char * here instead.
>

Would we then have to allocate the memory for the string elsewhere? I 
would have thought this is the more straightforward method.

>> +
>> +    rte_mempool_alloc_t alloc;
>> +
>> +    rte_mempool_put_t put __rte_cache_aligned;
>> +
>> +    rte_mempool_get_t get __rte_cache_aligned;
>> +
>> +    rte_mempool_get_count get_count __rte_cache_aligned;
>> +
>> +    rte_mempool_free_t free __rte_cache_aligned;
>> +};
>
> I agree with Jerin's comments. I don't think we should cache
> align each field. Maybe the whole structure.

Changed in v2.

>> +
>> +struct rte_mempool_handler_list {
>> +    rte_spinlock_t sl;          /**< Spinlock for add/delete. */
>> +
>> +    int32_t num_handlers;      /**< Number of handlers that are 
>> valid. */
>> +
>> +    /* storage for all possible handlers */
>> +    struct rte_mempool_handler handler[RTE_MEMPOOL_MAX_HANDLER_IDX];
>> +};
>> +
>> +int16_t add_handler(struct rte_mempool_handler *h);
>
> I think it should be called rte_mempool_register_handler().

Agreed, changed in v2.

>> +
>> +#define REGISTER_MEMPOOL_HANDLER(h) \
>> +static int16_t __attribute__((used)) testfn_##h(void);\
>> +int16_t __attribute__((constructor, used)) testfn_##h(void)\
>> +{\
>> +    return add_handler(&h);\
>> +}
>> +
>> +#endif
>>
>
>
>
> Regards,
> Olivier

Apologies for not addressing all of your comments for v2. I'll await 
your comments on the couple of outstanding questions above, then push up v3.
Mainly:
* change "rt_pool" to "mempool_storage"?
* change to const char * for mempool name, or leave as is.
* move all contents of rte_mempool_internal.h to rte_mempool.h, or leave 
as is.
* alternatively change name of rte_mempool_internal.h to 
rte_mempool_private.h
* I need to look into the performance of always using cache for single 
producer/consumer.

Thanks,
David.

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH 2/6] mempool: add stack (lifo) based external mempool handler
  2016-02-29 11:04       ` Hunt, David
@ 2016-03-04  9:04         ` Olivier MATZ
  0 siblings, 0 replies; 238+ messages in thread
From: Olivier MATZ @ 2016-03-04  9:04 UTC (permalink / raw)
  To: Hunt, David, dev

Hi David,

On 02/29/2016 12:04 PM, Hunt, David wrote:
> 
> On 2/19/2016 1:31 PM, Olivier MATZ wrote:
>> Hi David,
>>
>> On 02/16/2016 03:48 PM, David Hunt wrote:
>>> adds a simple stack based mempool handler
>>>
>>> Signed-off-by: David Hunt <david.hunt@intel.com>
>>> ---
>>>   lib/librte_mempool/Makefile            |   2 +-
>>>   lib/librte_mempool/rte_mempool.c       |   4 +-
>>>   lib/librte_mempool/rte_mempool.h       |   1 +
>>>   lib/librte_mempool/rte_mempool_stack.c | 164
>>> +++++++++++++++++++++++++++++++++
>>>   4 files changed, 169 insertions(+), 2 deletions(-)
>>>   create mode 100644 lib/librte_mempool/rte_mempool_stack.c
>>>
>> I don't get what is the purpose of this handler. Is it an example
>> or is it something that could be useful for dpdk applications?
>>
>> If it's an example, we should find a way to put the code outside
>> the librte_mempool library, maybe in the test program. I see there
>> is also a "custom handler". Do we really need to have both?
> They are both example handlers. I agree that we could reduce down to
> one, and since the 'custom' handler has autotests, I would suggest we
> keep that one.

ok

> The next question is where it should live. I agree that it's not ideal
> to have example code living in the same directory as the mempool
> library, but they are an integral part of the library itself. How about
> creating a handlers sub-directory? We could then keep all additional and
> sample handlers in there, away from the built-in handlers. Also, seeing
> as the handler code is intended to be part of the library, I think
> moving it out to the examples directory may confuse matters further.

I really don't think example code should go in the library. Either it
should go in dpdk/examples/ or in dpdk/app/test*.

From your initial description: "The External Mempool Manager is an
extension to the mempool API that allows users to add and use an
external mempool manager, which allows external memory subsystems such
as external hardware memory management systems and software based
memory allocators to be used with DPDK."

Can we find a hardware where the external mempool manager is required?
This would be the best example ever I think.

Regards,
Olivier

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH 1/6] mempool: add external mempool manager support
  2016-02-29 11:11       ` Hunt, David
@ 2016-03-04  9:04         ` Olivier MATZ
  0 siblings, 0 replies; 238+ messages in thread
From: Olivier MATZ @ 2016-03-04  9:04 UTC (permalink / raw)
  To: Hunt, David, dev

Hi David,

On 02/29/2016 12:11 PM, Hunt, David wrote:
>> Also, I'm sorry but it seems that several comments or questions I've made
>> in http://dpdk.org/ml/archives/dev/2016-February/032706.html are
>> not addressed.
>>
>> Examples:
>> - putting some part of the patch in separate commits
>> - meaning of "rt_pool"
>> - put_pool_bulk unclear comment
>> - should we also have get_pool_bulk stats?
>> - missing _MEMPOOL_STAT_ADD() in mempool_bulk()
>> - why internal in rte_mempool_internal.h?
>> - why default in rte_mempool_default.c?
>> - remaining references to stack handler (in a comment)
>> - ...?
>>
>> As you know, doing a proper code review takes a lot of time. If I
>> have to re-check all of my previous comments, it will take even
>> more. I'm not saying all my comments require a code change, but in case
>> you don't agree, please at least explain your opinion so we can debate
>> on the list.
>>
> Hi Olivier,
>    Sincerest apologies. I had intended in coming back around to your
> original comments after refactoring the code. I will do that now. I did
> take them into consideration, but I see now that I need to do further
> work, such as a clearer name for rt_pool, etc.  I will respond to your
> original email.

I thought some comments were ignored :)
So no problem in that case, thanks for clarifying.

Olivier

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH 1/5] mempool: add external mempool manager support
  2016-03-01 13:32     ` Hunt, David
@ 2016-03-04  9:05       ` Olivier MATZ
  2016-03-08 10:04         ` Hunt, David
  0 siblings, 1 reply; 238+ messages in thread
From: Olivier MATZ @ 2016-03-04  9:05 UTC (permalink / raw)
  To: Hunt, David, dev

Hi David,

>>
>>> @@ -622,6 +607,10 @@ rte_mempool_xmem_create(const char *name,
>>> unsigned n, unsigned elt_size,
>>>
>>>       mp->elt_va_end = mp->elt_va_start;
>>>
>>> +    /* Parameters are setup. Call the mempool handler alloc */
>>> +    if ((rte_mempool_ext_alloc(mp, name, n, socket_id, flags)) == NULL)
>>> +        goto exit;
>>> +
>>
>> I think some memory needs to be freed here. At least 'te'.
> 
> Done in v2

Please note that in the meanwhile, this fix has been pushed (as we need
it for next release):
http://dpdk.org/browse/dpdk/commit/lib/librte_mempool/rte_mempool.c?id=86f36ff9578b5f3d697c8fcf6072dcb70e2b246f


>>> diff --git a/lib/librte_mempool/rte_mempool.h
>>> b/lib/librte_mempool/rte_mempool.h
>>> index 6e2390a..620cfb7 100644
>>> --- a/lib/librte_mempool/rte_mempool.h
>>> +++ b/lib/librte_mempool/rte_mempool.h
>>> @@ -88,6 +88,8 @@ extern "C" {
>>>   struct rte_mempool_debug_stats {
>>>       uint64_t put_bulk;         /**< Number of puts. */
>>>       uint64_t put_objs;         /**< Number of objects successfully
>>> put. */
>>> +    uint64_t put_pool_bulk;    /**< Number of puts into pool. */
>>> +    uint64_t put_pool_objs;    /**< Number of objects into pool. */
>>>       uint64_t get_success_bulk; /**< Successful allocation number. */
>>>       uint64_t get_success_objs; /**< Objects successfully allocated. */
>>>       uint64_t get_fail_bulk;    /**< Failed allocation number. */
>>
>> I think the comment of put_pool_objs is not very clear.
>> Shouldn't we have the same stats for get?
>>
> 
> Not used, removed. Covered by put_bulk.

I guess you mean it will be removed in v3? It is still there in the v2
(the field, not the comment that has been fixed).

Shouldn't we have the same stats for get?


>>>   /**
>>>    * The RTE mempool structure.
>>>    */
>>>   struct rte_mempool {
>>>       char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool. */
>>> -    struct rte_ring *ring;           /**< Ring to store objects. */
>>>       phys_addr_t phys_addr;           /**< Phys. addr. of mempool
>>> struct. */
>>>       int flags;                       /**< Flags of the mempool. */
>>>       uint32_t size;                   /**< Size of the mempool. */
>>> @@ -194,6 +270,11 @@ struct rte_mempool {
>>>
>>>       unsigned private_data_size;      /**< Size of private data. */
>>>
>>> +    /* Common pool data structure pointer */
>>> +    void *rt_pool __rte_cache_aligned;
>>
>> What is the meaning of rt_pool?
> 
> I agree that it's probably not a very good name. Since it's basically
> the pointer which is used by the handlers callbacks, maybe we should
> call it mempool_storage? That leaves it generic enough that it can point
> at a ring, an array, or whatever else is needed for a particular handler.

My question was more about the "rt_" prefix. Maybe I missed something
obvious? I think "pool" or "pool_handler" is ok.


>>> @@ -769,8 +853,7 @@ __mempool_put_bulk(struct rte_mempool *mp, void *
>>> const *obj_table,
>>>
>>>   #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
>>>       /* cache is not enabled or single producer or non-EAL thread */
>>> -    if (unlikely(cache_size == 0 || is_mp == 0 ||
>>> -             lcore_id >= RTE_MAX_LCORE))
>>> +    if (unlikely(cache_size == 0 || lcore_id >= RTE_MAX_LCORE))
>>>           goto ring_enqueue;
>>>
>>>       /* Go straight to ring if put would overflow mem allocated for
>>> cache */
>>
>> If I understand well, we now always use the cache, even if the mempool
>> is single-producer. I was wondering if it would have a performance
>> impact... I suppose that using the cache is more efficient than the ring
>> in single-producer mode, so it may increase performance. Do you have an
>> idea of the impact here?
> 
> I've seen very little in performance gain, maybe a couple of percent for
> some tests, and up to 10% drop for some single core tests. I'll do some
> more specific testing based on SP versus MP.

OK thanks!


>>> diff --git a/lib/librte_mempool/rte_mempool_default.c
>>> b/lib/librte_mempool/rte_mempool_default.c
>>> new file mode 100644
>>> index 0000000..2493dc1
>>> --- /dev/null
>>> +++ b/lib/librte_mempool/rte_mempool_default.c
>>> +#include "rte_mempool_internal.h"
>>> +
>>> +/*
>>> + * Indirect jump table to support external memory pools
>>> + */
>>> +struct rte_mempool_handler_list mempool_handler_list = {
>>> +    .sl =  RTE_SPINLOCK_INITIALIZER ,
>>> +    .num_handlers = 0
>>> +};
>>> +
>>> +/* TODO Convert to older mechanism of an array of structs */
>>> +int16_t
>>> +add_handler(struct rte_mempool_handler *h)
>>> +{
>>> +    int16_t handler_idx;
>>> +
>>> +    /*  */
>>> +    rte_spinlock_lock(&mempool_handler_list.sl);
>>> +
>>> +    /* Check whether jump table has space */
>>> +    if (mempool_handler_list.num_handlers >=
>>> RTE_MEMPOOL_MAX_HANDLER_IDX) {
>>> +        rte_spinlock_unlock(&mempool_handler_list.sl);
>>> +        RTE_LOG(ERR, MEMPOOL,
>>> +                "Maximum number of mempool handlers exceeded\n");
>>> +        return -1;
>>> +    }
>>> +
>>> +    if ((h->put == NULL) || (h->get == NULL) ||
>>> +        (h->get_count == NULL)) {
>>> +        rte_spinlock_unlock(&mempool_handler_list.sl);
>>> +         RTE_LOG(ERR, MEMPOOL,
>>> +                    "Missing callback while registering mempool
>>> handler\n");
>>> +        return -1;
>>> +    }
>>> +
>>> +    /* add new handler index */
>>> +    handler_idx = mempool_handler_list.num_handlers++;
>>> +
>>> +    snprintf(mempool_handler_list.handler[handler_idx].name,
>>> +                RTE_MEMPOOL_NAMESIZE, "%s", h->name);
>>> +    mempool_handler_list.handler[handler_idx].alloc = h->alloc;
>>> +    mempool_handler_list.handler[handler_idx].put = h->put;
>>> +    mempool_handler_list.handler[handler_idx].get = h->get;
>>> +    mempool_handler_list.handler[handler_idx].get_count = h->get_count;
>>> +
>>> +    rte_spinlock_unlock(&mempool_handler_list.sl);
>>> +
>>> +    return handler_idx;
>>> +}
>>
>> Why not using a similar mechanism than what we have for PMDs?
>>
>>     void rte_eal_driver_register(struct rte_driver *driver)
>>     {
>>         TAILQ_INSERT_TAIL(&dev_driver_list, driver, next);
>>     }
>>
>> To do that, you just need to add a TAILQ_ENTRY() in your
>> rte_mempool_handler structure. This would avoid to duplicate the
>> structure into a static array whose size is limited.
>>
>> Accessing to the callbacks would be easier:
>>
>>     return mp->mp_handler->put(mp->rt_pool, obj_table, n);
>>
>> instead of:
>>
>>     return (mempool_handler_list.handler[mp->handler_idx].put)
>>                     (mp->rt_pool, obj_table, n);
>>
>> If we really want to copy the handlers somewhere, it could be in
>> the mempool structure. It would avoid an extra dereference
>> (note the first '.' instead of '->'):
>>
>>     return mp.mp_handler->put(mp->rt_pool, obj_table, n);
>>
>> After doing that, we could ask ourself if the wrappers are still
>> useful or not. I would have say that they could be removed.
>>
>>
>> The spinlock could be kept, although it may look a bit overkill:
>> - I don't expect to have several loading at the same time
>> - There is no unregister() function, so there is no risk to
>>   browse the list atomically
>>
> 
> Already discussed previously, index needed over pointer because of
> secondary processes.

Could you add a comment stating this? It may help for next readers
to have this info.

>> Last thing, I think this code should go in rte_mempool.c, not in
>> rte_mempool_default.c.
> 
> I was trying to keep the default handlers together in their own file,
> rather than having them in with the mempool framework. I think it's
> better having them separate, and new handlers can go in their own files
> also. no?

OK for the following functions:
 common_ring_mp_put()
 common_ring_sp_put()
 common_ring_mc_get()
 common_ring_sc_get()
 common_ring_get_count()
 rte_mempool_common_ring_alloc()  (note: only this one has
       a rte_mempool prefix, maybe it should be fixed)

The other functions are part of the framework to add an external
handler, I don't think they should go in rte_mempool_default.c
They could either go in rte_mempool.c or in another file
rte_mempool_handler.c.


>>> --- /dev/null
>>> +++ b/lib/librte_mempool/rte_mempool_internal.h
>>
>> Is it the proper name?
>> We could imagine a mempool handler provided by a plugin, and
>> in this case this code should go in rte_mempool.h.
> 
> I was trying to keep the public APIs in rte_mempool.h, and aal the
> private stuff in rte_mempool_internal.h. Maybe a better name would be
> rte_mempool_private.h?

Are these functions internal? I mean, is it possible for an application
or an external PMD (.so) to provide its own handler? I think it would
be really interesting to have this capability.

Then, I would prefer to have this either in rte_mempool.h or in
rte_mempool_handler.h (that would be coherent with the .c)

>>> +
>>> +struct rte_mempool_handler {
>>> +    char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool handler */
>>
>> I would use a const char * here instead.
>>
> 
> Would we then have to allocate the memory for the string elsewhere? I
> would have thought this is the more straightforward method.

My initial question was, can we just have something like this:

	// my_handler.c
	static struct rte_mempool_handler handler_stack = {
		.name = "my_handler",
		.alloc = my_alloc,
		...
	};

	// rte_mempool_handler.c
	int16_t
	rte_mempool_register_handler(struct rte_mempool_handler *h)
	{
		...
		handler->name = h->name; /* instead of snprintf */
		...
	}

But it won't be possible as the structures will be shared with
a secondary process. So the name[RTE_MEMPOOL_NAMESIZE] is fine.



Regards,
Olivier

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH 1/5] mempool: add external mempool manager support
  2016-03-04  9:05       ` Olivier MATZ
@ 2016-03-08 10:04         ` Hunt, David
  0 siblings, 0 replies; 238+ messages in thread
From: Hunt, David @ 2016-03-08 10:04 UTC (permalink / raw)
  To: Olivier MATZ, dev

Hi Olivier,

On 3/4/2016 9:05 AM, Olivier MATZ wrote:
> Hi David,
>
>>>> @@ -622,6 +607,10 @@ rte_mempool_xmem_create(const char *name,
>>>> unsigned n, unsigned elt_size,
>>>>
>>>>        mp->elt_va_end = mp->elt_va_start;
>>>>
>>>> +    /* Parameters are setup. Call the mempool handler alloc */
>>>> +    if ((rte_mempool_ext_alloc(mp, name, n, socket_id, flags)) == NULL)
>>>> +        goto exit;
>>>> +
>>> I think some memory needs to be freed here. At least 'te'.
>> Done in v2
> Please note that in the meanwhile, this fix has been pushed (as we need
> it for next release):
> http://dpdk.org/browse/dpdk/commit/lib/librte_mempool/rte_mempool.c?id=86f36ff9578b5f3d697c8fcf6072dcb70e2b246f

v3 will be rebased on top of the latest head of the repo.


>>>> diff --git a/lib/librte_mempool/rte_mempool.h
>>>> b/lib/librte_mempool/rte_mempool.h
>>>> index 6e2390a..620cfb7 100644
>>>> --- a/lib/librte_mempool/rte_mempool.h
>>>> +++ b/lib/librte_mempool/rte_mempool.h
>>>> @@ -88,6 +88,8 @@ extern "C" {
>>>>    struct rte_mempool_debug_stats {
>>>>        uint64_t put_bulk;         /**< Number of puts. */
>>>>        uint64_t put_objs;         /**< Number of objects successfully
>>>> put. */
>>>> +    uint64_t put_pool_bulk;    /**< Number of puts into pool. */
>>>> +    uint64_t put_pool_objs;    /**< Number of objects into pool. */
>>>>        uint64_t get_success_bulk; /**< Successful allocation number. */
>>>>        uint64_t get_success_objs; /**< Objects successfully allocated. */
>>>>        uint64_t get_fail_bulk;    /**< Failed allocation number. */
>>> I think the comment of put_pool_objs is not very clear.
>>> Shouldn't we have the same stats for get?
>>>
>> Not used, removed. Covered by put_bulk.
> I guess you mean it will be removed in v3? It is still there in the v2
> (the field, not the comment that has been fixed).
>
> Shouldn't we have the same stats for get?

I believe get's are covered by the get_success_bulk and get_fail_bulk

>>>>    /**
>>>>     * The RTE mempool structure.
>>>>     */
>>>>    struct rte_mempool {
>>>>        char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool. */
>>>> -    struct rte_ring *ring;           /**< Ring to store objects. */
>>>>        phys_addr_t phys_addr;           /**< Phys. addr. of mempool
>>>> struct. */
>>>>        int flags;                       /**< Flags of the mempool. */
>>>>        uint32_t size;                   /**< Size of the mempool. */
>>>> @@ -194,6 +270,11 @@ struct rte_mempool {
>>>>
>>>>        unsigned private_data_size;      /**< Size of private data. */
>>>>
>>>> +    /* Common pool data structure pointer */
>>>> +    void *rt_pool __rte_cache_aligned;
>>> What is the meaning of rt_pool?
>> I agree that it's probably not a very good name. Since it's basically
>> the pointer which is used by the handlers callbacks, maybe we should
>> call it mempool_storage? That leaves it generic enough that it can point
>> at a ring, an array, or whatever else is needed for a particular handler.
> My question was more about the "rt_" prefix. Maybe I missed something
> obvious? I think "pool" or "pool_handler" is ok.

v3 will use "pool"

>>>> -
>>>> +
>>>> +    /* add new handler index */
>>>> +    handler_idx = mempool_handler_list.num_handlers++;
>>>> +
>>>> +    snprintf(mempool_handler_list.handler[handler_idx].name,
>>>> +                RTE_MEMPOOL_NAMESIZE, "%s", h->name);
>>>> +    mempool_handler_list.handler[handler_idx].alloc = h->alloc;
>>>> +    mempool_handler_list.handler[handler_idx].put = h->put;
>>>> +    mempool_handler_list.handler[handler_idx].get = h->get;
>>>> +    mempool_handler_list.handler[handler_idx].get_count = h->get_count;
>>>> +
>>>> +    rte_spinlock_unlock(&mempool_handler_list.sl);
>>>> +
>>>> +    return handler_idx;
>>>> +}
>>> Why not using a similar mechanism than what we have for PMDs?
>>>
>>>      void rte_eal_driver_register(struct rte_driver *driver)
>>>      {
>>>          TAILQ_INSERT_TAIL(&dev_driver_list, driver, next);
>>>      }
>>>
>> Already discussed previously, index needed over pointer because of
>> secondary processes.
> Could you add a comment stating this? It may help for next readers
> to have this info.

v3: Comment added to the header file where we define handler_idx 
explaining the use of an index versus pointers

>>> Last thing, I think this code should go in rte_mempool.c, not in
>>> rte_mempool_default.c.
>> I was trying to keep the default handlers together in their own file,
>> rather than having them in with the mempool framework. I think it's
>> better having them separate, and new handlers can go in their own files
>> also. no?
> OK for the following functions:
>   common_ring_mp_put()
>   common_ring_sp_put()
>   common_ring_mc_get()
>   common_ring_sc_get()
>   common_ring_get_count()
>   rte_mempool_common_ring_alloc()  (note: only this one has
>         a rte_mempool prefix, maybe it should be fixed)
>
> The other functions are part of the framework to add an external
> handler, I don't think they should go in rte_mempool_default.c
> They could either go in rte_mempool.c or in another file
> rte_mempool_handler.c.

v3: Agreed. The bulk of the v3 is simplification of the files.
All of the "common" callbacks are now in in rte_mempool_handler.c and 
rte_mempool_handler.h.
I've renamed the 'alloc' function above to be in line with the naming of 
the others.
The 'custom' handler has been banished to the autotest code, so as to 
keep the library as clean as possible.
What's interesting is that the autotest can have all the code defining 
the custom mempool (including its registration), keeping the library 
free of user code.

>>>> --- /dev/null
>>>> +++ b/lib/librte_mempool/rte_mempool_internal.h
>>> Is it the proper name?
>>> We could imagine a mempool handler provided by a plugin, and
>>> in this case this code should go in rte_mempool.h.
>> I was trying to keep the public APIs in rte_mempool.h, and all the
>> private stuff in rte_mempool_internal.h. Maybe a better name would be
>> rte_mempool_private.h?
> Are these functions internal? I mean, is it possible for an application
> or an external PMD (.so) to provide its own handler? I think it would
> be really interesting to have this capability.
>
> Then, I would prefer to have this either in rte_mempool.h or in
> rte_mempool_handler.h (that would be coherent with the .c)
>   

Now rte_mempool_handler.h


So to synopsise:

rte_mempool.[ch] - mempool create, populate, audit, dump, etc.
rte_mempool_handler.[ch] - handler registration, and fns to call callbacks
rte_mempool_default.c - default internal handlers sp/sc, mp/mc, etc.

custom handler has been moved out to app/test/test_ext_mempool.c

Thanks,
Dave.

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH 2/6] mempool: add stack (lifo) based external mempool handler
  2016-02-19 13:31     ` Olivier MATZ
  2016-02-29 11:04       ` Hunt, David
@ 2016-03-08 20:45       ` Venkatesan, Venky
  2016-03-09 14:53         ` Olivier MATZ
  1 sibling, 1 reply; 238+ messages in thread
From: Venkatesan, Venky @ 2016-03-08 20:45 UTC (permalink / raw)
  To: Olivier MATZ, Hunt, David, dev

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Olivier MATZ
> Sent: Friday, February 19, 2016 5:31 AM
> To: Hunt, David <david.hunt@intel.com>; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 2/6] mempool: add stack (lifo) based
> external mempool handler
> 
> Hi David,
> 
> On 02/16/2016 03:48 PM, David Hunt wrote:
> > adds a simple stack based mempool handler
> >
> > Signed-off-by: David Hunt <david.hunt@intel.com>
> > ---
> >  lib/librte_mempool/Makefile            |   2 +-
> >  lib/librte_mempool/rte_mempool.c       |   4 +-
> >  lib/librte_mempool/rte_mempool.h       |   1 +
> >  lib/librte_mempool/rte_mempool_stack.c | 164
> > +++++++++++++++++++++++++++++++++
> >  4 files changed, 169 insertions(+), 2 deletions(-)  create mode
> > 100644 lib/librte_mempool/rte_mempool_stack.c
> >
> 
> I don't get what is the purpose of this handler. Is it an example or is it
> something that could be useful for dpdk applications?
> 
This is actually something that is useful for pipelining apps, where the mempool cache doesn't really work - example, where we have one core doing RX (and alloc), and another core doing TX (and return). In such a case, the mempool ring simply cycles through all the mbufs, resulting in an LLC miss on every mbuf allocated when the number of mbufs is large. A stack recycles buffers more effectively in this case.
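As a toy illustration of that point (a standalone sketch, not the proposed handler code): a LIFO free-list hands back the most recently freed buffer first, so it is likely still warm in cache, whereas a FIFO ring walks through every buffer in the pool before reusing any of them.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative only: a tiny LIFO pool.  The most recently put object
 * is the first one returned by get, which is what makes a stack
 * cache-friendly for alloc-on-one-core / free-on-another pipelines. */
#define POOL_SIZE 8

static void *stack_objs[POOL_SIZE];
static size_t stack_top;            /* number of objects currently stored */

static int stack_put(void *obj)
{
	if (stack_top >= POOL_SIZE)
		return -1;                  /* pool full */
	stack_objs[stack_top++] = obj;  /* LIFO: push on top */
	return 0;
}

static void *stack_get(void)
{
	if (stack_top == 0)
		return NULL;                /* pool empty */
	return stack_objs[--stack_top]; /* LIFO: pop the hottest object */
}
```

With this, the buffer freed last is reallocated first, instead of the pool cycling through its full depth.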

> If it's an example, we should find a way to put the code outside the
> librte_mempool library, maybe in the test program. I see there is also a
> "custom handler". Do we really need to have both?
> 
> 
> Regards,
> Olivier
> 

^ permalink raw reply	[flat|nested] 238+ messages in thread

* [PATCH v3 0/4] external mempool manager
  2016-02-16 14:48 ` [PATCH 0/6] " David Hunt
                     ` (6 preceding siblings ...)
  2016-02-19 13:25   ` [PATCH 0/6] external mempool manager Olivier MATZ
@ 2016-03-09  9:50   ` David Hunt
  2016-03-09  9:50     ` [PATCH v3 1/4] mempool: add external mempool manager support David Hunt
                       ` (6 more replies)
  7 siblings, 7 replies; 238+ messages in thread
From: David Hunt @ 2016-03-09  9:50 UTC (permalink / raw)
  To: dev

Hi list.

Here's the v3 version patch for an external mempool manager

v3 changes:
 * simplified the file layout, renamed to rte_mempool_handler.[hc]
 * moved the default handlers into rte_mempool_default.c
 * moved the example handler out into app/test/test_ext_mempool.c
 * removed is_mc/is_mp change, slight perf degradation on sp cached operation
 * removed stack handler, may re-introduce at a later date
 * Changes out of code reviews

v2 changes:
 * There was a lot of duplicate code between rte_mempool_xmem_create and
   rte_mempool_create_ext. This has now been refactored and is now
   hopefully cleaner.
 * The RTE_NEXT_ABI define is now used to allow building of the library
   in a format that is compatible with binaries built against previous
   versions of DPDK.
 * Changes out of code reviews. Hopefully I've got most of them included.

The External Mempool Manager is an extension to the mempool API that allows
users to add and use an external mempool manager, which allows external memory
subsystems such as external hardware memory management systems and software
based memory allocators to be used with DPDK.

The existing API to the internal DPDK mempool manager will remain unchanged
and will be backward compatible. However, there will be an ABI breakage, as
the mempool struct is changing. These changes are all contained within
RTE_NEXT_ABI defines, and the current or next ABI can be selected with
the CONFIG_RTE_NEXT_ABI config setting

There are two aspects to external mempool manager.
  1. Adding the code for your new mempool handler. This is achieved by adding a
     new mempool handler source file into the librte_mempool library, and
     using the REGISTER_MEMPOOL_HANDLER macro.
  2. Using the new API to call rte_mempool_create_ext to create a new mempool
     using the name parameter to identify which handler to use.

New API calls added
 1. A new mempool 'create' function which accepts mempool handler name.
 2. A new mempool 'rte_get_mempool_handler' function which accepts mempool
    handler name, and returns the index to the relevant set of callbacks for
    that mempool handler

Several external mempool managers may be used in the same application. A new
mempool can then be created by using the new 'create' function, providing the
mempool handler name to point the mempool to the relevant mempool manager
callback structure.

The old 'create' function can still be called by legacy programs, and will
internally work out the mempool handle based on the flags provided (single
producer, single consumer, etc). By default handles are created internally to
implement the built-in DPDK mempool manager and mempool types.

The external mempool manager needs to provide the following functions.
 1. alloc     - allocates the mempool memory, and adds each object onto a ring
 2. put       - puts an object back into the mempool once an application has
                finished with it
 3. get       - gets an object from the mempool for use by the application
 4. get_count - gets the number of available objects in the mempool
 5. free      - frees the mempool memory

Every time a get/put/get_count is called from the application/PMD, the
callback for that mempool is called. These functions are in the fastpath,
and any unoptimised handlers may limit performance.
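For illustration, that dispatch can be sketched outside DPDK as a struct of function pointers that every fast-path wrapper calls through. All names below are invented for the sketch; they are not the real rte_mempool API.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical callback mechanism: each handler supplies its fast-path
 * operations, and the pool wrappers dispatch through them on every call
 * -- one extra indirect call per get/put. */
struct pool_ops {
	int      (*put)(void *pool, void *obj);
	void    *(*get)(void *pool);
	unsigned (*get_count)(void *pool);
};

struct pool {
	const struct pool_ops *ops; /* chosen once at create time */
	void *storage;              /* handler-private data */
};

/* The wrappers are identical for every handler. */
static inline int pool_put(struct pool *p, void *obj)
{
	return p->ops->put(p->storage, obj);
}

static inline void *pool_get(struct pool *p)
{
	return p->ops->get(p->storage);
}

/* A trivial single-slot handler, just to exercise the dispatch. */
static void *slot;

static int slot_put(void *pool, void *obj)
{
	(void)pool;
	slot = obj;
	return 0;
}

static void *slot_get(void *pool)
{
	(void)pool;
	void *o = slot;
	slot = NULL;
	return o;
}

static unsigned slot_count(void *pool)
{
	(void)pool;
	return slot != NULL;
}

static const struct pool_ops slot_ops = { slot_put, slot_get, slot_count };
```

Because every get/put goes through one of these pointers, a slow handler implementation directly caps the whole datapath, which is the point made above.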

The new APIs are as follows:

1. rte_mempool_create_ext

struct rte_mempool *
rte_mempool_create_ext(const char * name, unsigned n,
        unsigned cache_size, unsigned private_data_size,
        int socket_id, unsigned flags,
        const char * handler_name);

2. rte_mempool_get_handler_name

char *
rte_mempool_get_handler_name(struct rte_mempool *mp);

Please see rte_mempool.h for further information on the parameters.


The important thing to note is that the mempool handler is passed by name
to rte_mempool_create_ext, and that in turn calls rte_get_mempool_handler to
get the handler index, which is stored in the rte_mempool structure. This
allows multiple processes to use the same mempool, as the function pointers
are accessed via handler index.
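A minimal standalone sketch of why an index (rather than a stored function pointer) survives multi-process use: each process carries its own copy of the handler table, possibly at a different address, but the index into it is identical everywhere. The names here are illustrative, not DPDK's.

```c
#include <assert.h>
#include <string.h>

#define MAX_HANDLERS 16
#define NAMESIZE 32

struct handler {
	char name[NAMESIZE];
	void *(*get)(void *pool);   /* per-process function pointer */
};

/* Each process has its own copy of this table; the function pointers in
 * it are only valid locally.  A mempool living in shared memory stores
 * just the small integer index, which is meaningful in every process. */
static struct handler handler_table[MAX_HANDLERS];
static int num_handlers;

static int register_handler(const struct handler *h)
{
	if (num_handlers >= MAX_HANDLERS)
		return -1;
	handler_table[num_handlers] = *h;   /* copy, don't keep the pointer */
	return num_handlers++;              /* index, not an address */
}

static int lookup_handler(const char *name)
{
	for (int i = 0; i < num_handlers; i++)
		if (strcmp(handler_table[i].name, name) == 0)
			return i;
	return -1;
}
```

This only works if every process registers the same handlers in the same order, which is why registration happens via constructors at load time rather than at arbitrary points.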

The mempool handler structure contains callbacks to the implementation of
the handler, and is set up for registration as follows:

static struct rte_mempool_handler handler_sp_mc = {
    .name = "ring_sp_mc",
    .alloc = rte_mempool_common_ring_alloc,
    .put = common_ring_sp_put,
    .get = common_ring_mc_get,
    .get_count = common_ring_get_count,
    .free = common_ring_free,
};

And then the following macro will register the handler in the array of handlers

REGISTER_MEMPOOL_HANDLER(handler_sp_mc);
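The macro works by emitting a constructor function that runs before main() and adds the handler to the table. A self-contained sketch of the same trick, assuming GCC/Clang __attribute__((constructor)) and using invented names:

```c
#include <assert.h>
#include <string.h>

static const char *registered[8];
static int nregistered;

static void add_handler(const char *name)
{
	if (nregistered < 8)
		registered[nregistered++] = name;
}

/* The macro stamps out one constructor per handler; the linker keeps it
 * because of __attribute__((used)), and the runtime calls it before
 * main(), so a handler self-registers just by being compiled in. */
#define REGISTER_HANDLER(n) \
static void __attribute__((constructor, used)) reg_##n(void) \
{ \
	add_handler(#n); \
}

REGISTER_HANDLER(my_handler)
```

No explicit init call is needed anywhere: linking the handler's object file into the binary is enough for it to appear in the table.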

For an example of a simple malloc-based mempool manager, see
lib/librte_mempool/custom_mempool.c

For an example of API usage, please see app/test/test_ext_mempool.c, which
implements a rudimentary mempool manager using simple mallocs for each
mempool object. This file also contains the callbacks and self registration
for the new handler.

David Hunt (4):
  mempool: add external mempool manager support
  mempool: add custom mempool handler example
  mempool: allow rte_pktmbuf_pool_create switch between mempool handlers
  mempool: add in the RTE_NEXT_ABI for ABI breakages

 app/test/Makefile                          |   3 +
 app/test/test_ext_mempool.c                | 451 +++++++++++++++++++++++++++++
 app/test/test_mempool_perf.c               |   2 +
 config/common_base                         |   2 +
 lib/librte_mbuf/rte_mbuf.c                 |  15 +
 lib/librte_mempool/Makefile                |   5 +
 lib/librte_mempool/rte_mempool.c           | 389 +++++++++++++++++++++++--
 lib/librte_mempool/rte_mempool.h           | 224 +++++++++++++-
 lib/librte_mempool/rte_mempool_default.c   | 136 +++++++++
 lib/librte_mempool/rte_mempool_handler.c   | 140 +++++++++
 lib/librte_mempool/rte_mempool_handler.h   |  75 +++++
 lib/librte_mempool/rte_mempool_version.map |   1 +
 12 files changed, 1415 insertions(+), 28 deletions(-)
 create mode 100644 app/test/test_ext_mempool.c
 create mode 100644 lib/librte_mempool/rte_mempool_default.c
 create mode 100644 lib/librte_mempool/rte_mempool_handler.c
 create mode 100644 lib/librte_mempool/rte_mempool_handler.h

-- 
2.5.0

^ permalink raw reply	[flat|nested] 238+ messages in thread

* [PATCH v3 1/4] mempool: add external mempool manager support
  2016-03-09  9:50   ` [PATCH v3 0/4] " David Hunt
@ 2016-03-09  9:50     ` David Hunt
  2016-04-11 22:52       ` Yuanhan Liu
  2016-03-09  9:50     ` [PATCH v3 2/4] mempool: add custom mempool handler example David Hunt
                       ` (5 subsequent siblings)
  6 siblings, 1 reply; 238+ messages in thread
From: David Hunt @ 2016-03-09  9:50 UTC (permalink / raw)
  To: dev

Adds the new rte_mempool_create_ext api and callback mechanism for
external mempool handlers

Modifies the existing rte_mempool_create to set up the handler_idx to
the relevant mempool handler based on the handler name:
    ring_sp_sc
    ring_mp_mc
    ring_sp_mc
    ring_mp_sc

v3: Cleanups from list review. Indents, comments, etc.
rebase on top of latest head, merged in change - fix leak
when creation fails: 86f36ff9578b5f3d697c8fcf6072dcb70e2b246f
split rte_mempool_default.c into itself and rte_mempool_handler.c
renamed rte_mempool_internal.h to rte_mempool_handler.h

v2: merges the duplicated code in rte_mempool_xmem_create and
rte_mempool_create_ext into one common function. The old functions
now call the new common function with the relevant parameters.

Signed-off-by: David Hunt <david.hunt@intel.com>
---
 app/test/test_mempool_perf.c               |   1 -
 lib/librte_mempool/Makefile                |   3 +
 lib/librte_mempool/rte_mempool.c           | 358 ++++++++++++++++++-----------
 lib/librte_mempool/rte_mempool.h           | 213 ++++++++++++++---
 lib/librte_mempool/rte_mempool_default.c   | 136 +++++++++++
 lib/librte_mempool/rte_mempool_handler.c   | 140 +++++++++++
 lib/librte_mempool/rte_mempool_handler.h   |  75 ++++++
 lib/librte_mempool/rte_mempool_version.map |   1 +
 8 files changed, 770 insertions(+), 157 deletions(-)
 create mode 100644 lib/librte_mempool/rte_mempool_default.c
 create mode 100644 lib/librte_mempool/rte_mempool_handler.c
 create mode 100644 lib/librte_mempool/rte_mempool_handler.h

diff --git a/app/test/test_mempool_perf.c b/app/test/test_mempool_perf.c
index cdc02a0..091c1df 100644
--- a/app/test/test_mempool_perf.c
+++ b/app/test/test_mempool_perf.c
@@ -161,7 +161,6 @@ per_lcore_mempool_test(__attribute__((unused)) void *arg)
 							   n_get_bulk);
 				if (unlikely(ret < 0)) {
 					rte_mempool_dump(stdout, mp);
-					rte_ring_dump(stdout, mp->ring);
 					/* in this case, objects are lost... */
 					return -1;
 				}
diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
index a6898ef..a32c89e 100644
--- a/lib/librte_mempool/Makefile
+++ b/lib/librte_mempool/Makefile
@@ -42,6 +42,9 @@ LIBABIVER := 1
 
 # all source are stored in SRCS-y
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool.c
+SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_handler.c
+SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_default.c
+
 ifeq ($(CONFIG_RTE_LIBRTE_XEN_DOM0),y)
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_dom0_mempool.c
 endif
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index f8781e1..7342a7f 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -59,10 +59,11 @@
 #include <rte_spinlock.h>
 
 #include "rte_mempool.h"
+#include "rte_mempool_handler.h"
 
 TAILQ_HEAD(rte_mempool_list, rte_tailq_entry);
 
-static struct rte_tailq_elem rte_mempool_tailq = {
+struct rte_tailq_elem rte_mempool_tailq = {
 	.name = "RTE_MEMPOOL",
 };
 EAL_REGISTER_TAILQ(rte_mempool_tailq)
@@ -149,7 +150,7 @@ mempool_add_elem(struct rte_mempool *mp, void *obj, uint32_t obj_idx,
 		obj_init(mp, obj_init_arg, obj, obj_idx);
 
 	/* enqueue in ring */
-	rte_ring_sp_enqueue(mp->ring, obj);
+	rte_mempool_ext_put_bulk(mp, &obj, 1);
 }
 
 uint32_t
@@ -420,117 +421,76 @@ rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
 }
 
 /*
+ * Common mempool create function.
  * Create the mempool over an already allocated chunk of memory.
  * That external memory buffer can consist of physically disjoint pages.
  * Setting vaddr to NULL makes the mempool fall back to the original behaviour,
- * and allocate space for mempool and it's elements as one big chunk of
- * physically continuos memory.
- * */
-struct rte_mempool *
-rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
+ * which will call rte_mempool_ext_alloc to allocate the object memory.
+ * If it is an internal mempool handler, it will allocate space for the mempool
+ * and its elements as one big chunk of physically contiguous memory.
+ * If it is an external mempool handler, it will allocate space for mempool
+ * and call the rte_mempool_ext_alloc for the object memory.
+ */
+static struct rte_mempool *
+mempool_create(const char *name,
+		unsigned num_elt, unsigned elt_size,
 		unsigned cache_size, unsigned private_data_size,
 		rte_mempool_ctor_t *mp_init, void *mp_init_arg,
 		rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
-		int socket_id, unsigned flags, void *vaddr,
-		const phys_addr_t paddr[], uint32_t pg_num, uint32_t pg_shift)
+		int socket_id, unsigned flags,
+		void *vaddr, const phys_addr_t paddr[],
+		uint32_t pg_num, uint32_t pg_shift,
+		const char *handler_name)
 {
-	char mz_name[RTE_MEMZONE_NAMESIZE];
-	char rg_name[RTE_RING_NAMESIZE];
+	const struct rte_memzone *mz;
 	struct rte_mempool_list *mempool_list;
 	struct rte_mempool *mp = NULL;
 	struct rte_tailq_entry *te = NULL;
-	struct rte_ring *r = NULL;
-	const struct rte_memzone *mz;
-	size_t mempool_size;
+	char mz_name[RTE_MEMZONE_NAMESIZE];
 	int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
-	int rg_flags = 0;
-	void *obj;
 	struct rte_mempool_objsz objsz;
-	void *startaddr;
+	void *startaddr = NULL;
 	int page_size = getpagesize();
-
-	/* compilation-time checks */
-	RTE_BUILD_BUG_ON((sizeof(struct rte_mempool) &
-			  RTE_CACHE_LINE_MASK) != 0);
-#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
-	RTE_BUILD_BUG_ON((sizeof(struct rte_mempool_cache) &
-			  RTE_CACHE_LINE_MASK) != 0);
-	RTE_BUILD_BUG_ON((offsetof(struct rte_mempool, local_cache) &
-			  RTE_CACHE_LINE_MASK) != 0);
-#endif
-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-	RTE_BUILD_BUG_ON((sizeof(struct rte_mempool_debug_stats) &
-			  RTE_CACHE_LINE_MASK) != 0);
-	RTE_BUILD_BUG_ON((offsetof(struct rte_mempool, stats) &
-			  RTE_CACHE_LINE_MASK) != 0);
-#endif
+	void *obj = NULL;
+	size_t mempool_size;
 
 	mempool_list = RTE_TAILQ_CAST(rte_mempool_tailq.head, rte_mempool_list);
 
 	/* asked cache too big */
 	if (cache_size > RTE_MEMPOOL_CACHE_MAX_SIZE ||
-	    CALC_CACHE_FLUSHTHRESH(cache_size) > n) {
-		rte_errno = EINVAL;
-		return NULL;
-	}
-
-	/* check that we have both VA and PA */
-	if (vaddr != NULL && paddr == NULL) {
-		rte_errno = EINVAL;
-		return NULL;
-	}
-
-	/* Check that pg_num and pg_shift parameters are valid. */
-	if (pg_num < RTE_DIM(mp->elt_pa) || pg_shift > MEMPOOL_PG_SHIFT_MAX) {
+		CALC_CACHE_FLUSHTHRESH(cache_size) > num_elt) {
 		rte_errno = EINVAL;
 		return NULL;
 	}
 
-	/* "no cache align" imply "no spread" */
-	if (flags & MEMPOOL_F_NO_CACHE_ALIGN)
-		flags |= MEMPOOL_F_NO_SPREAD;
+	if (flags & MEMPOOL_F_INT_HANDLER) {
+		/* Check that pg_num and pg_shift parameters are valid. */
+		if (pg_num < RTE_DIM(mp->elt_pa) ||
+				pg_shift > MEMPOOL_PG_SHIFT_MAX) {
+			rte_errno = EINVAL;
+			return NULL;
+		}
 
-	/* ring flags */
-	if (flags & MEMPOOL_F_SP_PUT)
-		rg_flags |= RING_F_SP_ENQ;
-	if (flags & MEMPOOL_F_SC_GET)
-		rg_flags |= RING_F_SC_DEQ;
+		/* "no cache align" imply "no spread" */
+		if (flags & MEMPOOL_F_NO_CACHE_ALIGN)
+			flags |= MEMPOOL_F_NO_SPREAD;
 
-	/* calculate mempool object sizes. */
-	if (!rte_mempool_calc_obj_size(elt_size, flags, &objsz)) {
-		rte_errno = EINVAL;
-		return NULL;
+		/* calculate mempool object sizes. */
+		if (!rte_mempool_calc_obj_size(elt_size, flags, &objsz)) {
+			rte_errno = EINVAL;
+			return NULL;
+		}
 	}
 
 	rte_rwlock_write_lock(RTE_EAL_MEMPOOL_RWLOCK);
 
-	/* allocate the ring that will be used to store objects */
-	/* Ring functions will return appropriate errors if we are
-	 * running as a secondary process etc., so no checks made
-	 * in this function for that condition */
-	snprintf(rg_name, sizeof(rg_name), RTE_MEMPOOL_MZ_FORMAT, name);
-	r = rte_ring_create(rg_name, rte_align32pow2(n+1), socket_id, rg_flags);
-	if (r == NULL)
-		goto exit_unlock;
-
 	/*
 	 * reserve a memory zone for this mempool: private data is
 	 * cache-aligned
 	 */
-	private_data_size = (private_data_size +
-			     RTE_MEMPOOL_ALIGN_MASK) & (~RTE_MEMPOOL_ALIGN_MASK);
+	private_data_size = RTE_ALIGN_CEIL(private_data_size,
+						RTE_MEMPOOL_ALIGN);
 
-	if (! rte_eal_has_hugepages()) {
-		/*
-		 * expand private data size to a whole page, so that the
-		 * first pool element will start on a new standard page
-		 */
-		int head = sizeof(struct rte_mempool);
-		int new_size = (private_data_size + head) % page_size;
-		if (new_size) {
-			private_data_size += page_size - new_size;
-		}
-	}
 
 	/* try to allocate tailq entry */
 	te = rte_zmalloc("MEMPOOL_TAILQ_ENTRY", sizeof(*te), 0);
@@ -539,23 +499,51 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
 		goto exit_unlock;
 	}
 
-	/*
-	 * If user provided an external memory buffer, then use it to
-	 * store mempool objects. Otherwise reserve a memzone that is large
-	 * enough to hold mempool header and metadata plus mempool objects.
-	 */
-	mempool_size = MEMPOOL_HEADER_SIZE(mp, pg_num) + private_data_size;
-	mempool_size = RTE_ALIGN_CEIL(mempool_size, RTE_MEMPOOL_ALIGN);
-	if (vaddr == NULL)
-		mempool_size += (size_t)objsz.total_size * n;
+	if (flags & MEMPOOL_F_INT_HANDLER) {
+
+		if (!rte_eal_has_hugepages()) {
+			/*
+			 * expand private data size to a whole page, so that the
+			 * first pool element will start on a new standard page
+			 */
+			int head = sizeof(struct rte_mempool);
+			int new_size = (private_data_size + head) % page_size;
+
+			if (new_size)
+				private_data_size += page_size - new_size;
+		}
+
 
-	if (! rte_eal_has_hugepages()) {
 		/*
-		 * we want the memory pool to start on a page boundary,
-		 * because pool elements crossing page boundaries would
-		 * result in discontiguous physical addresses
+		 * If user provided an external memory buffer, then use it to
+		 * store mempool objects. Otherwise reserve a memzone that is
+		 * large enough to hold mempool header and metadata plus
+		 * mempool objects
 		 */
-		mempool_size += page_size;
+		mempool_size =
+			MEMPOOL_HEADER_SIZE(mp, pg_num) + private_data_size;
+		mempool_size =
+			RTE_ALIGN_CEIL(mempool_size, RTE_MEMPOOL_ALIGN);
+		if (vaddr == NULL)
+			mempool_size += (size_t)objsz.total_size * num_elt;
+
+		if (!rte_eal_has_hugepages()) {
+			/*
+			 * we want the memory pool to start on a page boundary,
+			 * because pool elements crossing page boundaries would
+			 * result in discontiguous physical addresses
+			 */
+			mempool_size += page_size;
+		}
+	} else {
+		/*
+		 * If user provided an external memory buffer, then use it to
+		 * store mempool objects. Otherwise reserve a memzone that is
+		 * large enough to hold mempool header and metadata plus
+		 * mempool objects
+		 */
+		mempool_size = sizeof(*mp) + private_data_size;
+		mempool_size = RTE_ALIGN_CEIL(mempool_size, RTE_MEMPOOL_ALIGN);
 	}
 
 	snprintf(mz_name, sizeof(mz_name), RTE_MEMPOOL_MZ_FORMAT, name);
@@ -564,16 +552,22 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
 	if (mz == NULL)
 		goto exit_unlock;
 
-	if (rte_eal_has_hugepages()) {
-		startaddr = (void*)mz->addr;
-	} else {
-		/* align memory pool start address on a page boundary */
-		unsigned long addr = (unsigned long)mz->addr;
-		if (addr & (page_size - 1)) {
-			addr += page_size;
-			addr &= ~(page_size - 1);
+	if (flags & MEMPOOL_F_INT_HANDLER) {
+
+		if (rte_eal_has_hugepages()) {
+			startaddr = (void *)mz->addr;
+		} else {
+			/* align memory pool start address on a page boundary */
+			unsigned long addr = (unsigned long)mz->addr;
+
+			if (addr & (page_size - 1)) {
+				addr += page_size;
+				addr &= ~(page_size - 1);
+			}
+			startaddr = (void *)addr;
 		}
-		startaddr = (void*)addr;
+	} else {
+		startaddr = (void *)mz->addr;
 	}
 
 	/* init the mempool structure */
@@ -581,8 +575,7 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
 	memset(mp, 0, sizeof(*mp));
 	snprintf(mp->name, sizeof(mp->name), "%s", name);
 	mp->phys_addr = mz->phys_addr;
-	mp->ring = r;
-	mp->size = n;
+	mp->size = num_elt;
 	mp->flags = flags;
 	mp->elt_size = objsz.elt_size;
 	mp->header_size = objsz.header_size;
@@ -591,35 +584,52 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
 	mp->cache_flushthresh = CALC_CACHE_FLUSHTHRESH(cache_size);
 	mp->private_data_size = private_data_size;
 
-	/* calculate address of the first element for continuous mempool. */
-	obj = (char *)mp + MEMPOOL_HEADER_SIZE(mp, pg_num) +
-		private_data_size;
-	obj = RTE_PTR_ALIGN_CEIL(obj, RTE_MEMPOOL_ALIGN);
-
-	/* populate address translation fields. */
-	mp->pg_num = pg_num;
-	mp->pg_shift = pg_shift;
-	mp->pg_mask = RTE_LEN2MASK(mp->pg_shift, typeof(mp->pg_mask));
+	mp->handler_idx = rte_get_mempool_handler_idx(handler_name);
+	if (mp->handler_idx < 0) {
+		RTE_LOG(ERR, MEMPOOL, "Cannot find mempool handler by name!\n");
+		goto exit_unlock;
+	}
 
-	/* mempool elements allocated together with mempool */
-	if (vaddr == NULL) {
-		mp->elt_va_start = (uintptr_t)obj;
-		mp->elt_pa[0] = mp->phys_addr +
-			(mp->elt_va_start - (uintptr_t)mp);
+	if (flags & MEMPOOL_F_INT_HANDLER) {
+		/* calculate address of first element for contiguous mempool. */
+		obj = (char *)mp + MEMPOOL_HEADER_SIZE(mp, pg_num) +
+			private_data_size;
+		obj = RTE_PTR_ALIGN_CEIL(obj, RTE_MEMPOOL_ALIGN);
+
+		/* populate address translation fields. */
+		mp->pg_num = pg_num;
+		mp->pg_shift = pg_shift;
+		mp->pg_mask = RTE_LEN2MASK(mp->pg_shift, typeof(mp->pg_mask));
+
+		/* mempool elements allocated together with mempool */
+		if (vaddr == NULL) {
+			mp->elt_va_start = (uintptr_t)obj;
+			mp->elt_pa[0] = mp->phys_addr +
+				(mp->elt_va_start - (uintptr_t)mp);
+		/* mempool elements in a separate chunk of memory. */
+		} else {
+			mp->elt_va_start = (uintptr_t)vaddr;
+			memcpy(mp->elt_pa, paddr,
+				sizeof(mp->elt_pa[0]) * pg_num);
+		}
 
-	/* mempool elements in a separate chunk of memory. */
-	} else {
-		mp->elt_va_start = (uintptr_t)vaddr;
-		memcpy(mp->elt_pa, paddr, sizeof (mp->elt_pa[0]) * pg_num);
+		mp->elt_va_end = mp->elt_va_start;
 	}
 
-	mp->elt_va_end = mp->elt_va_start;
+	/* Parameters are set up. Call the mempool handler alloc */
+	mp->pool =
+		rte_mempool_ext_alloc(mp, name, num_elt, socket_id, flags);
+	if (mp->pool == NULL) {
+		RTE_LOG(ERR, MEMPOOL, "Failed to alloc mempool!\n");
+		goto exit_unlock;
+	}
 
 	/* call the initializer */
 	if (mp_init)
 		mp_init(mp, mp_init_arg);
 
-	mempool_populate(mp, n, 1, obj_init, obj_init_arg);
+	if (obj_init)
+		mempool_populate(mp, num_elt, 1, obj_init, obj_init_arg);
 
 	te->data = (void *) mp;
 
@@ -632,19 +642,83 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
 
 exit_unlock:
 	rte_rwlock_write_unlock(RTE_EAL_MEMPOOL_RWLOCK);
-	rte_ring_free(r);
 	rte_free(te);
 
 	return NULL;
 }
 
+/* Create the mempool over already allocated chunk of memory */
+struct rte_mempool *
+rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
+		unsigned cache_size, unsigned private_data_size,
+		rte_mempool_ctor_t *mp_init, void *mp_init_arg,
+		rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
+		int socket_id, unsigned flags, void *vaddr,
+		const phys_addr_t paddr[], uint32_t pg_num, uint32_t pg_shift)
+{
+	struct rte_mempool *mp = NULL;
+	char handler_name[RTE_MEMPOOL_NAMESIZE];
+
+
+	/* compilation-time checks */
+	RTE_BUILD_BUG_ON((sizeof(struct rte_mempool) &
+			  RTE_CACHE_LINE_MASK) != 0);
+#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
+	RTE_BUILD_BUG_ON((sizeof(struct rte_mempool_cache) &
+			  RTE_CACHE_LINE_MASK) != 0);
+	RTE_BUILD_BUG_ON((offsetof(struct rte_mempool, local_cache) &
+			  RTE_CACHE_LINE_MASK) != 0);
+#endif
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+	RTE_BUILD_BUG_ON((sizeof(struct rte_mempool_debug_stats) &
+			  RTE_CACHE_LINE_MASK) != 0);
+	RTE_BUILD_BUG_ON((offsetof(struct rte_mempool, stats) &
+			  RTE_CACHE_LINE_MASK) != 0);
+#endif
+
+
+	/* check that we have both VA and PA */
+	if (vaddr != NULL && paddr == NULL) {
+		rte_errno = EINVAL;
+		return NULL;
+	}
+
+	/*
+	 * Four combinations of SP/SC/MP/MC are possible; examine the
+	 * flags to select the correct handler name. Note that ring_sp_sc
+	 * requires both flags to be set.
+	 */
+	if ((flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET)) ==
+			(MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
+		snprintf(handler_name, sizeof(handler_name), "ring_sp_sc");
+	else if (flags & MEMPOOL_F_SP_PUT)
+		snprintf(handler_name, sizeof(handler_name), "ring_sp_mc");
+	else if (flags & MEMPOOL_F_SC_GET)
+		snprintf(handler_name, sizeof(handler_name), "ring_mp_sc");
+	else
+		snprintf(handler_name, sizeof(handler_name), "ring_mp_mc");
+
+	flags |= MEMPOOL_F_INT_HANDLER;
+
+	mp = mempool_create(name,
+		n, elt_size,
+		cache_size, private_data_size,
+		mp_init, mp_init_arg,
+		obj_init, obj_init_arg,
+		socket_id,
+		flags,
+		vaddr, paddr,
+		pg_num, pg_shift,
+		handler_name);
+
+	return mp;
+}
+
 /* Return the number of entries in the mempool */
 unsigned
 rte_mempool_count(const struct rte_mempool *mp)
 {
 	unsigned count;
 
-	count = rte_ring_count(mp->ring);
+	count = rte_mempool_ext_get_count(mp);
 
 #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
 	{
@@ -800,7 +874,6 @@ rte_mempool_dump(FILE *f, const struct rte_mempool *mp)
 
 	fprintf(f, "mempool <%s>@%p\n", mp->name, mp);
 	fprintf(f, "  flags=%x\n", mp->flags);
-	fprintf(f, "  ring=<%s>@%p\n", mp->ring->name, mp->ring);
 	fprintf(f, "  phys_addr=0x%" PRIx64 "\n", mp->phys_addr);
 	fprintf(f, "  size=%"PRIu32"\n", mp->size);
 	fprintf(f, "  header_size=%"PRIu32"\n", mp->header_size);
@@ -823,7 +896,7 @@ rte_mempool_dump(FILE *f, const struct rte_mempool *mp)
 			mp->size);
 
 	cache_count = rte_mempool_dump_cache(f, mp);
-	common_count = rte_ring_count(mp->ring);
+	common_count = rte_mempool_ext_get_count(mp);
 	if ((cache_count + common_count) > mp->size)
 		common_count = mp->size - cache_count;
 	fprintf(f, "  common_pool_count=%u\n", common_count);
@@ -917,3 +990,30 @@ void rte_mempool_walk(void (*func)(const struct rte_mempool *, void *),
 
 	rte_rwlock_read_unlock(RTE_EAL_MEMPOOL_RWLOCK);
 }
+
+
+/* create the mempool using an external mempool manager */
+struct rte_mempool *
+rte_mempool_create_ext(const char *name, unsigned n, unsigned elt_size,
+	unsigned cache_size, unsigned private_data_size,
+	rte_mempool_ctor_t *mp_init, void *mp_init_arg,
+	rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
+	int socket_id, unsigned flags,
+	const char *handler_name)
+{
+	struct rte_mempool *mp = NULL;
+
+	mp = mempool_create(name,
+		n, elt_size,
+		cache_size, private_data_size,
+		mp_init, mp_init_arg,
+		obj_init, obj_init_arg,
+		socket_id, flags,
+		NULL, NULL,              /* vaddr, paddr */
+		0, 0,                    /* pg_num, pg_shift, */
+		handler_name);
+
+	return mp;
+}
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 9745bf0..f987d8a 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -175,12 +175,93 @@ struct rte_mempool_objtlr {
 #endif
 };
 
+/* Handler functions for external mempool support */
+typedef void *(*rte_mempool_alloc_t)(struct rte_mempool *mp,
+		const char *name, unsigned n, int socket_id, unsigned flags);
+typedef int (*rte_mempool_put_t)(void *p,
+		void * const *obj_table, unsigned n);
+typedef int (*rte_mempool_get_t)(void *p, void **obj_table,
+		unsigned n);
+typedef unsigned (*rte_mempool_get_count)(void *p);
+typedef int (*rte_mempool_free_t)(struct rte_mempool *mp);
+
+/**
+ * @internal wrapper for external mempool manager alloc callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param name
+ *   Name of the memory pool.
+ * @param n
+ *   Number of objects in the mempool.
+ * @param socket_id
+ *   socket id on which to allocate.
+ * @param flags
+ *   general flags to allocate function (MEMPOOL_F_* flags)
+ * @return
+ *   Pointer to the handler's private pool data, or NULL on error.
+ */
+void *
+rte_mempool_ext_alloc(struct rte_mempool *mp,
+		const char *name, unsigned n, int socket_id, unsigned flags);
+
+/**
+ * @internal wrapper for external mempool manager get callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param obj_table
+ *   Pointer to a table of void * pointers (objects).
+ * @param n
+ *   Number of objects to get.
+ * @return
+ *   - >=0: Success; got n number of objects
+ *   - <0: Error; code of handler get function.
+ */
+int
+rte_mempool_ext_get_bulk(struct rte_mempool *mp, void **obj_table,
+		unsigned n);
+
+/**
+ * @internal wrapper for external mempool manager put callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param obj_table
+ *   Pointer to a table of void * pointers (objects).
+ * @param n
+ *   Number of objects to put
+ * @return
+ *   - >=0: Success; number of objects supplied.
+ *   - <0: Error; code of handler put function.
+ */
+int
+rte_mempool_ext_put_bulk(struct rte_mempool *mp, void * const *obj_table,
+		unsigned n);
+
+/**
+ * @internal wrapper for external mempool manager get_count callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @return
+ *   The number of objects available in the mempool.
+ */
+unsigned
+rte_mempool_ext_get_count(const struct rte_mempool *mp);
+
+/**
+ * @internal wrapper for external mempool manager free callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @return
+ *   0 on success; <0: error code of the handler free function.
+ */
+int
+rte_mempool_ext_free(struct rte_mempool *mp);
+
 /**
  * The RTE mempool structure.
  */
 struct rte_mempool {
 	char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool. */
-	struct rte_ring *ring;           /**< Ring to store objects. */
 	phys_addr_t phys_addr;           /**< Phys. addr. of mempool struct. */
 	int flags;                       /**< Flags of the mempool. */
 	uint32_t size;                   /**< Size of the mempool. */
@@ -194,6 +275,18 @@ struct rte_mempool {
 
 	unsigned private_data_size;      /**< Size of private data. */
 
+	/* Common pool data structure pointer */
+	void *pool;
+
+	/*
+	 * Index into the array of structs containing callback fn pointers.
+	 * We're using an index here rather than pointers to the callbacks
+	 * to facilitate any secondary processes that may want to use
+	 * this mempool. Any function pointers stored in the mempool
+	 * directly would not be valid for secondary processes.
+	 */
+	int16_t handler_idx;
+
 #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
 	/** Per-lcore local cache. */
 	struct rte_mempool_cache local_cache[RTE_MAX_LCORE];
@@ -223,6 +316,8 @@ struct rte_mempool {
 #define MEMPOOL_F_NO_CACHE_ALIGN 0x0002 /**< Do not align objs on cache lines.*/
 #define MEMPOOL_F_SP_PUT         0x0004 /**< Default put is "single-producer".*/
 #define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
+#define MEMPOOL_F_INT_HANDLER    0x0020 /**< Using internal mempool handler */
+
 
 /**
  * @internal When debug is enabled, store some statistics.
@@ -728,7 +823,6 @@ rte_dom0_mempool_create(const char *name, unsigned n, unsigned elt_size,
 		rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
 		int socket_id, unsigned flags);
 
-
 /**
  * Dump the status of the mempool to the console.
  *
@@ -753,7 +847,7 @@ void rte_mempool_dump(FILE *f, const struct rte_mempool *mp);
  */
 static inline void __attribute__((always_inline))
 __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
-		    unsigned n, int is_mp)
+		    unsigned n, __rte_unused int is_mp)
 {
 #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
 	struct rte_mempool_cache *cache;
@@ -793,10 +887,15 @@ __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
 
 	cache->len += n;
 
-	if (cache->len >= flushthresh) {
-		rte_ring_mp_enqueue_bulk(mp->ring, &cache->objs[cache_size],
+	if (unlikely(cache->len >= flushthresh)) {
+		rte_mempool_ext_put_bulk(mp, &cache->objs[cache_size],
 				cache->len - cache_size);
 		cache->len = cache_size;
+		/*
+		 * Increment stats counter to tell us how many pool puts
+		 * happened
+		 */
+		__MEMPOOL_STAT_ADD(mp, put_pool, n);
 	}
 
 	return;
@@ -804,22 +903,10 @@ __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
 ring_enqueue:
 #endif /* RTE_MEMPOOL_CACHE_MAX_SIZE > 0 */
 
-	/* push remaining objects in ring */
-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-	if (is_mp) {
-		if (rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n) < 0)
-			rte_panic("cannot put objects in mempool\n");
-	}
-	else {
-		if (rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n) < 0)
-			rte_panic("cannot put objects in mempool\n");
-	}
-#else
-	if (is_mp)
-		rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n);
-	else
-		rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n);
-#endif
+	/* Increment stats counter to tell us how many pool puts happened */
+	__MEMPOOL_STAT_ADD(mp, put_pool, n);
+
+	rte_mempool_ext_put_bulk(mp, obj_table, n);
 }
 
 
@@ -943,7 +1030,7 @@ rte_mempool_put(struct rte_mempool *mp, void *obj)
  */
 static inline int __attribute__((always_inline))
 __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
-		   unsigned n, int is_mc)
+		   unsigned n, __rte_unused int is_mc)
 {
 	int ret;
 #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
@@ -967,7 +1054,8 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
 		uint32_t req = n + (cache_size - cache->len);
 
 		/* How many do we require i.e. number to fill the cache + the request */
-		ret = rte_ring_mc_dequeue_bulk(mp->ring, &cache->objs[cache->len], req);
+		ret = rte_mempool_ext_get_bulk(mp,
+			&cache->objs[cache->len], req);
 		if (unlikely(ret < 0)) {
 			/*
 			 * In the offchance that we are buffer constrained,
@@ -995,10 +1083,7 @@ ring_dequeue:
 #endif /* RTE_MEMPOOL_CACHE_MAX_SIZE > 0 */
 
 	/* get remaining objects from ring */
-	if (is_mc)
-		ret = rte_ring_mc_dequeue_bulk(mp->ring, obj_table, n);
-	else
-		ret = rte_ring_sc_dequeue_bulk(mp->ring, obj_table, n);
+	ret = rte_mempool_ext_get_bulk(mp, obj_table, n);
 
 	if (ret < 0)
 		__MEMPOOL_STAT_ADD(mp, get_fail, n);
@@ -1401,6 +1486,80 @@ ssize_t rte_mempool_xmem_usage(void *vaddr, uint32_t elt_num, size_t elt_sz,
 void rte_mempool_walk(void (*func)(const struct rte_mempool *, void *arg),
 		      void *arg);
 
+/**
+ * Function to get the name of a mempool handler
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @return
+ *   The name of the mempool handler
+ */
+char *rte_mempool_get_handler_name(struct rte_mempool *mp);
+
+/**
+ * Create a new mempool named *name* in memory.
+ *
+ * This function uses an externally defined alloc callback to allocate memory.
+ * Its size is set to n elements.
+ * All elements of the mempool are allocated separately to the mempool header.
+ *
+ * @param name
+ *   The name of the mempool.
+ * @param n
+ *   The number of elements in the mempool. The optimum size (in terms of
+ *   memory usage) for a mempool is when n is a power of two minus one:
+ *   n = (2^q - 1).
+ * @param cache_size
+ *   If cache_size is non-zero, the rte_mempool library will try to
+ *   limit the accesses to the common lockless pool, by maintaining a
+ *   per-lcore object cache. This argument must be lower or equal to
+ *   CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE and n / 1.5. It is advised to choose
+ *   cache_size to have "n modulo cache_size == 0": if this is
+ *   not the case, some elements will always stay in the pool and will
+ *   never be used. The access to the per-lcore table is of course
+ *   faster than the multi-producer/consumer pool. The cache can be
+ *   disabled if the cache_size argument is set to 0; it can be useful to
+ *   avoid losing objects in cache. Note that even if not used, the
+ *   memory space for cache is always reserved in a mempool structure,
+ *   except if CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE is set to 0.
+ * @param private_data_size
+ *   The size of the private data appended after the mempool
+ *   structure. This is useful for storing some private data after the
+ *   mempool structure, as is done for rte_mbuf_pool for example.
+ * @param mp_init
+ *   A function pointer that is called for initialization of the pool,
+ *   before object initialization. The user can initialize the private
+ *   data in this function if needed. This parameter can be NULL if
+ *   not needed.
+ * @param mp_init_arg
+ *   An opaque pointer to data that can be used in the mempool
+ *   constructor function.
+ * @param obj_init
+ *   A function pointer that is called for each object at
+ *   initialization of the pool. The user can set some meta data in
+ *   objects if needed. This parameter can be NULL if not needed.
+ *   The obj_init() function takes the mempool pointer, the init_arg,
+ *   the object pointer and the object number as parameters.
+ * @param obj_init_arg
+ *   An opaque pointer to data that can be used as an argument for
+ *   each call to the object constructor function.
+ * @param socket_id
+ *   The *socket_id* argument is the socket identifier in the case of
+ *   NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
+ *   constraint for the reserved zone.
+ * @param flags
+ *   general flags to allocate function (MEMPOOL_F_* flags)
+ * @return
+ *   The pointer to the newly allocated mempool on success; NULL on error.
+ */
+struct rte_mempool *
+rte_mempool_create_ext(const char *name, unsigned n, unsigned elt_size,
+		unsigned cache_size, unsigned private_data_size,
+		rte_mempool_ctor_t *mp_init, void *mp_init_arg,
+		rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
+		int socket_id, unsigned flags,
+		const char *handler_name);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_mempool/rte_mempool_default.c b/lib/librte_mempool/rte_mempool_default.c
new file mode 100644
index 0000000..3852be1
--- /dev/null
+++ b/lib/librte_mempool/rte_mempool_default.c
@@ -0,0 +1,136 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <rte_mempool.h>
+#include <rte_malloc.h>
+#include <string.h>
+
+#include "rte_mempool.h"
+#include "rte_mempool_handler.h"
+
+static int
+common_ring_mp_put(void *p, void * const *obj_table, unsigned n)
+{
+	return rte_ring_mp_enqueue_bulk((struct rte_ring *)p, obj_table, n);
+}
+
+static int
+common_ring_sp_put(void *p, void * const *obj_table, unsigned n)
+{
+	return rte_ring_sp_enqueue_bulk((struct rte_ring *)p, obj_table, n);
+}
+
+static int
+common_ring_mc_get(void *p, void **obj_table, unsigned n)
+{
+	return rte_ring_mc_dequeue_bulk((struct rte_ring *)p, obj_table, n);
+}
+
+static int
+common_ring_sc_get(void *p, void **obj_table, unsigned n)
+{
+	return rte_ring_sc_dequeue_bulk((struct rte_ring *)p, obj_table, n);
+}
+
+static unsigned
+common_ring_get_count(void *p)
+{
+	return rte_ring_count((struct rte_ring *)p);
+}
+
+
+static void *
+common_ring_alloc(struct rte_mempool *mp,
+		const char *name, unsigned n, int socket_id, unsigned flags)
+{
+	struct rte_ring *r;
+	char rg_name[RTE_RING_NAMESIZE];
+	int rg_flags = 0;
+
+	if (flags & MEMPOOL_F_SP_PUT)
+		rg_flags |= RING_F_SP_ENQ;
+	if (flags & MEMPOOL_F_SC_GET)
+		rg_flags |= RING_F_SC_DEQ;
+
+	/* Allocate the ring that will be used to store objects.
+	 * Ring functions will return appropriate errors if we are
+	 * running as a secondary process etc., so no checks are made
+	 * in this function for that condition. */
+	snprintf(rg_name, sizeof(rg_name), "%s-ring", name);
+	r = rte_ring_create(rg_name, rte_align32pow2(n+1), socket_id, rg_flags);
+	if (r == NULL)
+		return NULL;
+
+	mp->pool = r;
+
+	return r;
+}
+
+static struct rte_mempool_handler handler_mp_mc = {
+	.name = "ring_mp_mc",
+	.alloc = common_ring_alloc,
+	.put = common_ring_mp_put,
+	.get = common_ring_mc_get,
+	.get_count = common_ring_get_count,
+	.free = NULL
+};
+static struct rte_mempool_handler handler_sp_sc = {
+	.name = "ring_sp_sc",
+	.alloc = common_ring_alloc,
+	.put = common_ring_sp_put,
+	.get = common_ring_sc_get,
+	.get_count = common_ring_get_count,
+	.free = NULL
+};
+static struct rte_mempool_handler handler_mp_sc = {
+	.name = "ring_mp_sc",
+	.alloc = common_ring_alloc,
+	.put = common_ring_mp_put,
+	.get = common_ring_sc_get,
+	.get_count = common_ring_get_count,
+	.free = NULL
+};
+static struct rte_mempool_handler handler_sp_mc = {
+	.name = "ring_sp_mc",
+	.alloc = common_ring_alloc,
+	.put = common_ring_sp_put,
+	.get = common_ring_mc_get,
+	.get_count = common_ring_get_count,
+	.free = NULL
+};
+
+REGISTER_MEMPOOL_HANDLER(handler_mp_mc);
+REGISTER_MEMPOOL_HANDLER(handler_sp_sc);
+REGISTER_MEMPOOL_HANDLER(handler_mp_sc);
+REGISTER_MEMPOOL_HANDLER(handler_sp_mc);
diff --git a/lib/librte_mempool/rte_mempool_handler.c b/lib/librte_mempool/rte_mempool_handler.c
new file mode 100644
index 0000000..f04f45f
--- /dev/null
+++ b/lib/librte_mempool/rte_mempool_handler.c
@@ -0,0 +1,140 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <rte_mempool.h>
+#include <rte_malloc.h>
+#include <string.h>
+
+#include "rte_mempool.h"
+#include "rte_mempool_handler.h"
+
+/*
+ * Indirect jump table to support external memory pools
+ */
+struct rte_mempool_handler_list mempool_handler_list = {
+	.sl = RTE_SPINLOCK_INITIALIZER,
+	.num_handlers = 0
+};
+
+/*
+ * Return the name of the mempool handler used by the mempool
+ */
+char *
+rte_mempool_get_handler_name(struct rte_mempool *mp)
+{
+	return mempool_handler_list.handler[mp->handler_idx].name;
+}
+
+int16_t
+rte_mempool_register_handler(struct rte_mempool_handler *h)
+{
+	int16_t handler_idx;
+
+	/* take the lock while we modify the handler list */
+	rte_spinlock_lock(&mempool_handler_list.sl);
+
+	/* Check whether jump table has space */
+	if (mempool_handler_list.num_handlers >= RTE_MEMPOOL_MAX_HANDLER_IDX) {
+		rte_spinlock_unlock(&mempool_handler_list.sl);
+		RTE_LOG(ERR, MEMPOOL,
+				"Maximum number of mempool handlers exceeded\n");
+		return -1;
+	}
+
+	if ((h->put == NULL) || (h->get == NULL) ||
+			(h->get_count == NULL)) {
+		rte_spinlock_unlock(&mempool_handler_list.sl);
+		RTE_LOG(ERR, MEMPOOL,
+			"Missing callback while registering mempool handler\n");
+		return -1;
+	}
+
+	/* add new handler index */
+	handler_idx = mempool_handler_list.num_handlers++;
+
+	snprintf(mempool_handler_list.handler[handler_idx].name,
+				RTE_MEMPOOL_NAMESIZE, "%s", h->name);
+	mempool_handler_list.handler[handler_idx].alloc = h->alloc;
+	mempool_handler_list.handler[handler_idx].put = h->put;
+	mempool_handler_list.handler[handler_idx].get = h->get;
+	mempool_handler_list.handler[handler_idx].get_count = h->get_count;
+	mempool_handler_list.handler[handler_idx].free = h->free;
+
+	rte_spinlock_unlock(&mempool_handler_list.sl);
+
+	return handler_idx;
+}
+
+int16_t
+rte_get_mempool_handler_idx(const char *name)
+{
+	int16_t i;
+
+	for (i = 0; i < mempool_handler_list.num_handlers; i++) {
+		if (!strcmp(name, mempool_handler_list.handler[i].name))
+			return i;
+	}
+	return -1;
+}
+
+void *
+rte_mempool_ext_alloc(struct rte_mempool *mp,
+		const char *name, unsigned n, int socket_id, unsigned flags)
+{
+	if (mempool_handler_list.handler[mp->handler_idx].alloc) {
+		return (mempool_handler_list.handler[mp->handler_idx].alloc)
+						(mp, name, n, socket_id, flags);
+	}
+	return NULL;
+}
+
+inline int __attribute__((always_inline))
+rte_mempool_ext_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
+{
+	return (mempool_handler_list.handler[mp->handler_idx].get)
+						(mp->pool, obj_table, n);
+}
+
+inline int __attribute__((always_inline))
+rte_mempool_ext_put_bulk(struct rte_mempool *mp, void * const *obj_table,
+		unsigned n)
+{
+	return (mempool_handler_list.handler[mp->handler_idx].put)
+						(mp->pool, obj_table, n);
+}
+
+unsigned
+rte_mempool_ext_get_count(const struct rte_mempool *mp)
+{
+	return (mempool_handler_list.handler[mp->handler_idx].get_count)
+						(mp->pool);
+}
diff --git a/lib/librte_mempool/rte_mempool_handler.h b/lib/librte_mempool/rte_mempool_handler.h
new file mode 100644
index 0000000..982396f
--- /dev/null
+++ b/lib/librte_mempool/rte_mempool_handler.h
@@ -0,0 +1,75 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_MEMPOOL_HANDLER_H_
+#define _RTE_MEMPOOL_HANDLER_H_
+
+#include <rte_spinlock.h>
+#include <rte_mempool.h>
+
+#define RTE_MEMPOOL_MAX_HANDLER_IDX 16
+
+struct rte_mempool_handler {
+	char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool handler */
+
+	rte_mempool_alloc_t alloc;       /**< Allocate the external pool. */
+
+	rte_mempool_get_count get_count; /**< Get the number of available objects. */
+
+	rte_mempool_free_t free;         /**< Free the external pool. */
+
+	rte_mempool_put_t put;           /**< Put objects into the pool. */
+
+	rte_mempool_get_t get;           /**< Get objects from the pool. */
+} __rte_cache_aligned;
+
+struct rte_mempool_handler_list {
+	rte_spinlock_t sl;		  /**< Spinlock for add/delete. */
+
+	int32_t num_handlers;	  /**< Number of handlers that are valid. */
+
+	/* storage for all possible handlers */
+	struct rte_mempool_handler handler[RTE_MEMPOOL_MAX_HANDLER_IDX];
+};
+
+int16_t rte_mempool_register_handler(struct rte_mempool_handler *h);
+int16_t rte_get_mempool_handler_idx(const char *name);
+
+#define REGISTER_MEMPOOL_HANDLER(h) \
+static int16_t __attribute__((used)) testfn_##h(void);\
+int16_t __attribute__((constructor, used)) testfn_##h(void)\
+{\
+	return rte_mempool_register_handler(&h);\
+}
+
+#endif
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index 17151e0..589db27 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -6,6 +6,7 @@ DPDK_2.0 {
 	rte_mempool_calc_obj_size;
 	rte_mempool_count;
 	rte_mempool_create;
+	rte_mempool_create_ext;
 	rte_mempool_dump;
 	rte_mempool_list_dump;
 	rte_mempool_lookup;
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 238+ messages in thread

* [PATCH v3 2/4] mempool: add custom mempool handler example
  2016-03-09  9:50   ` [PATCH v3 0/4] " David Hunt
  2016-03-09  9:50     ` [PATCH v3 1/4] mempool: add external mempool manager support David Hunt
@ 2016-03-09  9:50     ` David Hunt
  2016-03-09  9:50     ` [PATCH v3 3/4] mempool: allow rte_pktmbuf_pool_create switch between mempool handlers David Hunt
                       ` (4 subsequent siblings)
  6 siblings, 0 replies; 238+ messages in thread
From: David Hunt @ 2016-03-09  9:50 UTC (permalink / raw)
  To: dev

Add a custom mempool handler as part of an autotest:
ext_mempool_autotest as defined in test_ext_mempool.c

v3: now contains the mempool handler within the test file along
with its get/put/get_count callbacks and self-registration

Signed-off-by: David Hunt <david.hunt@intel.com>
---
 app/test/Makefile           |   1 +
 app/test/test_ext_mempool.c | 451 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 452 insertions(+)
 create mode 100644 app/test/test_ext_mempool.c

diff --git a/app/test/Makefile b/app/test/Makefile
index ec33e1a..9a2f75f 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -74,6 +74,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer_perf.c
 SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer_racecond.c
 
 SRCS-y += test_mempool.c
+SRCS-y += test_ext_mempool.c
 SRCS-y += test_mempool_perf.c
 
 SRCS-y += test_mbuf.c
diff --git a/app/test/test_ext_mempool.c b/app/test/test_ext_mempool.c
new file mode 100644
index 0000000..6beada0
--- /dev/null
+++ b/app/test/test_ext_mempool.c
@@ -0,0 +1,451 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <string.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <stdarg.h>
+#include <errno.h>
+#include <sys/queue.h>
+
+#include <rte_common.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_launch.h>
+#include <rte_cycles.h>
+#include <rte_eal.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+#include <rte_spinlock.h>
+#include <rte_malloc.h>
+
+#include "test.h"
+
+/*
+ * Mempool
+ * =======
+ *
+ * Basic tests: done on one core with and without cache:
+ *
+ *    - Get one object, put one object
+ *    - Get two objects, put two objects
+ *    - Get all objects, test that their content is not modified and
+ *      put them back in the pool.
+ */
+
+#define TIME_S 5
+#define MEMPOOL_ELT_SIZE 2048
+#define MAX_KEEP 128
+#define MEMPOOL_SIZE 8192
+
+static struct rte_mempool *mp;
+static struct rte_mempool *ext_nocache, *ext_cache;
+
+static rte_atomic32_t synchro;
+
+/*
+ * For our tests, we use the following struct to pass info to our create
+ *  callback so it can call rte_mempool_create
+ */
+struct custom_mempool_alloc_params {
+	char ring_name[RTE_RING_NAMESIZE];
+	unsigned n_elt;
+	unsigned elt_size;
+};
+
+/*
+ * Simple example of custom mempool structure. Holds pointers to all the
+ * elements which are simply malloc'd in this example.
+ */
+struct custom_mempool {
+	struct rte_ring *r;            /* Ring to manage elements */
+	void *elements[MEMPOOL_SIZE];  /* Element pointers */
+};
+
+/*
+ * save the object number in the first 4 bytes of object data. All
+ * other bytes are set to 0.
+ */
+static void
+my_obj_init(struct rte_mempool *mp, __attribute__((unused)) void *arg,
+		void *obj, unsigned i)
+{
+	uint32_t *objnum = obj;
+
+	memset(obj, 0, mp->elt_size);
+	*objnum = i;
+	printf("Setting objnum to %d\n", i);
+}
+
+/* basic tests (done on one core) */
+static int
+test_mempool_basic(void)
+{
+	uint32_t *objnum;
+	void **objtable;
+	void *obj, *obj2;
+	char *obj_data;
+	int ret = 0;
+	unsigned i, j;
+
+	/* dump the mempool status */
+	rte_mempool_dump(stdout, mp);
+
+	printf("Count = %d\n", rte_mempool_count(mp));
+	printf("get an object\n");
+	if (rte_mempool_get(mp, &obj) < 0) {
+		printf("get Failed\n");
+		return -1;
+	}
+	printf("Count = %d\n", rte_mempool_count(mp));
+	rte_mempool_dump(stdout, mp);
+
+	/* tests that improve coverage */
+	printf("get object count\n");
+	if (rte_mempool_count(mp) != MEMPOOL_SIZE - 1)
+		return -1;
+
+	printf("get private data\n");
+	if (rte_mempool_get_priv(mp) !=
+			(char *) mp + MEMPOOL_HEADER_SIZE(mp, mp->pg_num))
+		return -1;
+
+	printf("get physical address of an object\n");
+	if (MEMPOOL_IS_CONTIG(mp) &&
+			rte_mempool_virt2phy(mp, obj) !=
+			(phys_addr_t) (mp->phys_addr +
+			(phys_addr_t) ((char *) obj - (char *) mp)))
+		return -1;
+
+	printf("put the object back\n");
+	rte_mempool_put(mp, obj);
+	rte_mempool_dump(stdout, mp);
+
+	printf("get 2 objects\n");
+	if (rte_mempool_get(mp, &obj) < 0)
+		return -1;
+	if (rte_mempool_get(mp, &obj2) < 0) {
+		rte_mempool_put(mp, obj);
+		return -1;
+	}
+	rte_mempool_dump(stdout, mp);
+
+	printf("put the objects back\n");
+	rte_mempool_put(mp, obj);
+	rte_mempool_put(mp, obj2);
+	rte_mempool_dump(stdout, mp);
+
+	/*
+	 * get many objects: we cannot get them all because the cache
+	 * on other cores may not be empty.
+	 */
+	objtable = malloc(MEMPOOL_SIZE * sizeof(void *));
+	if (objtable == NULL)
+		return -1;
+
+	for (i = 0; i < MEMPOOL_SIZE; i++) {
+		if (rte_mempool_get(mp, &objtable[i]) < 0)
+			break;
+	}
+
+	/*
+	 * for each object, check that its content was not modified,
+	 * and put objects back in pool
+	 */
+	while (i--) {
+		obj = objtable[i];
+		obj_data = obj;
+		objnum = obj;
+		if (*objnum > MEMPOOL_SIZE) {
+			printf("bad object number(%d)\n", *objnum);
+			ret = -1;
+			break;
+		}
+		for (j = sizeof(*objnum); j < mp->elt_size; j++) {
+			if (obj_data[j] != 0)
+				ret = -1;
+		}
+
+		rte_mempool_put(mp, objtable[i]);
+	}
+
+	free(objtable);
+	if (ret == -1)
+		printf("objects were modified!\n");
+
+	return ret;
+}
+
+static int
+test_mempool_creation_with_exceeded_cache_size(void)
+{
+	struct rte_mempool *mp_cov;
+
+	mp_cov = rte_mempool_create("test_mempool_creation_exceeded_cache_size",
+						MEMPOOL_SIZE,
+						MEMPOOL_ELT_SIZE,
+						RTE_MEMPOOL_CACHE_MAX_SIZE + 32,
+						0,
+						NULL, NULL,
+						my_obj_init, NULL,
+						SOCKET_ID_ANY, 0);
+	if (NULL != mp_cov)
+		return -1;
+
+	return 0;
+}
+
+/*
+ * Test some additional basic mempool operations
+ */
+static int
+test_mempool_basic_ex(struct rte_mempool *mp)
+{
+	unsigned i;
+	void **obj;
+	void *err_obj;
+	int ret = -1;
+
+	if (mp == NULL)
+		return ret;
+
+	obj = rte_calloc("test_mempool_basic_ex",
+					MEMPOOL_SIZE, sizeof(void *), 0);
+	if (obj == NULL) {
+		printf("test_mempool_basic_ex: rte_calloc failed\n");
+		return ret;
+	}
+	printf("test_mempool_basic_ex now mempool (%s) has %u free entries\n",
+					mp->name, rte_mempool_free_count(mp));
+	if (rte_mempool_full(mp) != 1) {
+		printf("test_mempool_basic_ex the mempool should be full\n");
+		goto fail_mp_basic_ex;
+	}
+
+	for (i = 0; i < MEMPOOL_SIZE; i++) {
+		if (rte_mempool_mc_get(mp, &obj[i]) < 0) {
+			printf("test_mp_basic_ex fail to get object for [%u]\n",
+					i);
+			goto fail_mp_basic_ex;
+		}
+	}
+	if (rte_mempool_mc_get(mp, &err_obj) == 0) {
+		printf("test_mempool_basic_ex get an impossible obj\n");
+		goto fail_mp_basic_ex;
+	}
+	printf("number: %u\n", i);
+	if (rte_mempool_empty(mp) != 1) {
+		printf("test_mempool_basic_ex the mempool should be empty\n");
+		goto fail_mp_basic_ex;
+	}
+
+	for (i = 0; i < MEMPOOL_SIZE; i++)
+		rte_mempool_mp_put(mp, obj[i]);
+
+	if (rte_mempool_full(mp) != 1) {
+		printf("test_mempool_basic_ex the mempool should be full\n");
+		goto fail_mp_basic_ex;
+	}
+
+	ret = 0;
+
+fail_mp_basic_ex:
+	if (obj != NULL)
+		rte_free((void *)obj);
+
+	return ret;
+}
+
+static int
+test_mempool_same_name_twice_creation(void)
+{
+	struct rte_mempool *mp_tc;
+
+	mp_tc = rte_mempool_create("test_mempool_same_name_twice_creation",
+						MEMPOOL_SIZE,
+						MEMPOOL_ELT_SIZE, 0, 0,
+						NULL, NULL,
+						NULL, NULL,
+						SOCKET_ID_ANY, 0);
+	if (NULL == mp_tc)
+		return -1;
+
+	mp_tc = rte_mempool_create("test_mempool_same_name_twice_creation",
+						MEMPOOL_SIZE,
+						MEMPOOL_ELT_SIZE, 0, 0,
+						NULL, NULL,
+						NULL, NULL,
+						SOCKET_ID_ANY, 0);
+	if (NULL != mp_tc)
+		return -1;
+
+	return 0;
+}
+
+/*
+ * Basic test for the mempool_xmem functions.
+ */
+static int
+test_mempool_xmem_misc(void)
+{
+	uint32_t elt_num, total_size;
+	size_t sz;
+	ssize_t usz;
+
+	elt_num = MAX_KEEP;
+	total_size = rte_mempool_calc_obj_size(MEMPOOL_ELT_SIZE, 0, NULL);
+	sz = rte_mempool_xmem_size(elt_num, total_size, MEMPOOL_PG_SHIFT_MAX);
+
+	usz = rte_mempool_xmem_usage(NULL, elt_num, total_size, 0, 1,
+		MEMPOOL_PG_SHIFT_MAX);
+
+	if (sz != (size_t)usz)  {
+		printf("failure @ %s: rte_mempool_xmem_usage(%u, %u) "
+			"returns: %#zx, while expected: %#zx;\n",
+			__func__, elt_num, total_size, sz, (size_t)usz);
+		return (-1);
+	}
+
+	return 0;
+}
+
+
+
+static int
+test_ext_mempool(void)
+{
+	rte_atomic32_init(&synchro);
+
+	/* create an external mempool (without cache) */
+	if (ext_nocache == NULL)
+		ext_nocache = rte_mempool_create_ext(
+				"ext_nocache",         /* Name */
+				MEMPOOL_SIZE,          /* Number of Elements */
+				MEMPOOL_ELT_SIZE,      /* Element size */
+				0,                     /* Cache Size */
+				0,                     /* Private Data size */
+				NULL, NULL, NULL, NULL,
+				0,                     /* socket_id */
+				0,                     /* flags */
+				"custom_handler"
+				);
+	if (ext_nocache == NULL)
+		return -1;
+
+	/* create an external mempool (with cache) */
+	if (ext_cache == NULL)
+		ext_cache = rte_mempool_create_ext(
+				"ext_cache",           /* Name */
+				MEMPOOL_SIZE,          /* Number of Elements */
+				MEMPOOL_ELT_SIZE,      /* Element size */
+				16,                    /* Cache Size */
+				0,                     /* Private Data size */
+				NULL, NULL, NULL, NULL,
+				0,                     /* socket_id */
+				0,                     /* flags */
+				"custom_handler"
+				);
+	if (ext_cache == NULL)
+		return -1;
+
+	if (rte_mempool_get_handler_name(ext_nocache)) {
+		printf("Handler name is \"%s\"\n",
+			rte_mempool_get_handler_name(ext_nocache));
+	} else {
+		printf("Cannot lookup mempool handler name\n");
+		return -1;
+	}
+
+	if (rte_mempool_get_handler_name(ext_cache))
+		printf("Handler name is \"%s\"\n",
+			rte_mempool_get_handler_name(ext_cache));
+	else {
+		printf("Cannot lookup mempool handler name\n");
+		return -1;
+	}
+
+	/* retrieve the mempool from its name */
+	if (rte_mempool_lookup("ext_nocache") != ext_nocache) {
+		printf("Cannot lookup mempool from its name\n");
+		return -1;
+	}
+	/* retrieve the mempool from its name */
+	if (rte_mempool_lookup("ext_cache") != ext_cache) {
+		printf("Cannot lookup mempool from its name\n");
+		return -1;
+	}
+
+	rte_mempool_list_dump(stdout);
+
+	printf("Running basic tests\n");
+	/* basic tests without cache */
+	mp = ext_nocache;
+	if (test_mempool_basic() < 0)
+		return -1;
+
+	/* basic tests with cache */
+	mp = ext_cache;
+	if (test_mempool_basic() < 0)
+		return -1;
+
+	/* more basic tests without cache */
+	if (test_mempool_basic_ex(ext_nocache) < 0)
+		return -1;
+
+	if (test_mempool_creation_with_exceeded_cache_size() < 0)
+		return -1;
+
+	if (test_mempool_same_name_twice_creation() < 0)
+		return -1;
+
+	if (test_mempool_xmem_misc() < 0)
+		return -1;
+
+	rte_mempool_list_dump(stdout);
+
+	return 0;
+}
+
+static struct test_command mempool_cmd = {
+	.command = "ext_mempool_autotest",
+	.callback = test_ext_mempool,
+};
+REGISTER_TEST_COMMAND(mempool_cmd);
-- 
2.5.0


* [PATCH v3 3/4] mempool: allow rte_pktmbuf_pool_create switch between mempool handlers
  2016-03-09  9:50   ` [PATCH v3 0/4] " David Hunt
  2016-03-09  9:50     ` [PATCH v3 1/4] mempool: add external mempool manager support David Hunt
  2016-03-09  9:50     ` [PATCH v3 2/4] mempool: add custom mempool handler example David Hunt
@ 2016-03-09  9:50     ` David Hunt
  2016-03-09 10:54       ` Panu Matilainen
  2016-03-09  9:50     ` [PATCH v3 4/4] mempool: add in the RTE_NEXT_ABI for ABI breakages David Hunt
                       ` (3 subsequent siblings)
  6 siblings, 1 reply; 238+ messages in thread
From: David Hunt @ 2016-03-09  9:50 UTC (permalink / raw)
  To: dev

If the user wants rte_pktmbuf_pool_create() to use an external mempool
handler, they set CONFIG_RTE_MEMPOOL_HANDLER_NAME to the name of the
mempool handler they wish to use, and set CONFIG_RTE_MEMPOOL_HANDLER_EXT to 'y'.

Signed-off-by: David Hunt <david.hunt@intel.com>
---
 config/common_base         | 2 ++
 lib/librte_mbuf/rte_mbuf.c | 8 ++++++++
 2 files changed, 10 insertions(+)

diff --git a/config/common_base b/config/common_base
index 1af28c8..9d70cf4 100644
--- a/config/common_base
+++ b/config/common_base
@@ -350,6 +350,8 @@ CONFIG_RTE_RING_PAUSE_REP_COUNT=0
 CONFIG_RTE_LIBRTE_MEMPOOL=y
 CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE=512
 CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG=n
+CONFIG_RTE_MEMPOOL_HANDLER_EXT=n
+CONFIG_RTE_MEMPOOL_HANDLER_NAME="custom_handler"
 
 #
 # Compile librte_mbuf
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index c18b438..42b0cd1 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -167,10 +167,18 @@ rte_pktmbuf_pool_create(const char *name, unsigned n,
 	mbp_priv.mbuf_data_room_size = data_room_size;
 	mbp_priv.mbuf_priv_size = priv_size;
 
+#ifdef RTE_MEMPOOL_HANDLER_EXT
+	return rte_mempool_create_ext(name, n, elt_size,
+		cache_size, sizeof(struct rte_pktmbuf_pool_private),
+		rte_pktmbuf_pool_init, &mbp_priv, rte_pktmbuf_init, NULL,
+		socket_id, 0,
+		RTE_MEMPOOL_HANDLER_NAME);
+#else
 	return rte_mempool_create(name, n, elt_size,
 		cache_size, sizeof(struct rte_pktmbuf_pool_private),
 		rte_pktmbuf_pool_init, &mbp_priv, rte_pktmbuf_init, NULL,
 		socket_id, 0);
+#endif
 }
 
 /* do some sanity checks on a mbuf: panic if it fails */
-- 
2.5.0


* [PATCH v3 4/4] mempool: add in the RTE_NEXT_ABI for ABI breakages
  2016-03-09  9:50   ` [PATCH v3 0/4] " David Hunt
                       ` (2 preceding siblings ...)
  2016-03-09  9:50     ` [PATCH v3 3/4] mempool: allow rte_pktmbuf_pool_create switch between mempool handlers David Hunt
@ 2016-03-09  9:50     ` David Hunt
  2016-03-09 10:46       ` Panu Matilainen
  2016-03-09 11:10     ` [PATCH v3 0/4] external mempool manager Hunt, David
                       ` (2 subsequent siblings)
  6 siblings, 1 reply; 238+ messages in thread
From: David Hunt @ 2016-03-09  9:50 UTC (permalink / raw)
  To: dev

This patch is for those who want to be able to switch easily
between the new mempool layout and the old, by changing the value of
RTE_NEXT_ABI in the common_base config file.

v3: Updated to take the rework of the file layouts into consideration

v2: Kept all the NEXT_ABI defs in this patch so as to make the
previous patches easier to read, and also to make it clear what
code is necessary to keep ABI compatibility when NEXT_ABI is
disabled.

Signed-off-by: David Hunt <david.hunt@intel.com>
---
 app/test/Makefile                |   2 +
 app/test/test_mempool_perf.c     |   3 +
 lib/librte_mbuf/rte_mbuf.c       |   7 ++
 lib/librte_mempool/Makefile      |   2 +
 lib/librte_mempool/rte_mempool.c | 245 ++++++++++++++++++++++++++++++++++++++-
 lib/librte_mempool/rte_mempool.h |  59 +++++++++-
 6 files changed, 315 insertions(+), 3 deletions(-)

diff --git a/app/test/Makefile b/app/test/Makefile
index 9a2f75f..8fcf0c2 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -74,7 +74,9 @@ SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer_perf.c
 SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer_racecond.c
 
 SRCS-y += test_mempool.c
+ifeq ($(CONFIG_RTE_NEXT_ABI),y)
 SRCS-y += test_ext_mempool.c
+endif
 SRCS-y += test_mempool_perf.c
 
 SRCS-y += test_mbuf.c
diff --git a/app/test/test_mempool_perf.c b/app/test/test_mempool_perf.c
index 091c1df..ca69e49 100644
--- a/app/test/test_mempool_perf.c
+++ b/app/test/test_mempool_perf.c
@@ -161,6 +161,9 @@ per_lcore_mempool_test(__attribute__((unused)) void *arg)
 							   n_get_bulk);
 				if (unlikely(ret < 0)) {
 					rte_mempool_dump(stdout, mp);
+#ifndef RTE_NEXT_ABI
+					rte_ring_dump(stdout, mp->ring);
+#endif
 					/* in this case, objects are lost... */
 					return -1;
 				}
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index 42b0cd1..967d987 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -167,6 +167,7 @@ rte_pktmbuf_pool_create(const char *name, unsigned n,
 	mbp_priv.mbuf_data_room_size = data_room_size;
 	mbp_priv.mbuf_priv_size = priv_size;
 
+#ifdef RTE_NEXT_ABI
 #ifdef RTE_MEMPOOL_HANDLER_EXT
 	return rte_mempool_create_ext(name, n, elt_size,
 		cache_size, sizeof(struct rte_pktmbuf_pool_private),
@@ -179,6 +180,12 @@ rte_pktmbuf_pool_create(const char *name, unsigned n,
 		rte_pktmbuf_pool_init, &mbp_priv, rte_pktmbuf_init, NULL,
 		socket_id, 0);
 #endif
+#else
+	return rte_mempool_create(name, n, elt_size,
+		cache_size, sizeof(struct rte_pktmbuf_pool_private),
+		rte_pktmbuf_pool_init, &mbp_priv, rte_pktmbuf_init, NULL,
+		socket_id, 0);
+#endif
 }
 
 /* do some sanity checks on a mbuf: panic if it fails */
diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
index a32c89e..a27eef9 100644
--- a/lib/librte_mempool/Makefile
+++ b/lib/librte_mempool/Makefile
@@ -42,8 +42,10 @@ LIBABIVER := 1
 
 # all source are stored in SRCS-y
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool.c
+ifeq ($(CONFIG_RTE_NEXT_ABI),y)
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_handler.c
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_default.c
+endif
 
 ifeq ($(CONFIG_RTE_LIBRTE_XEN_DOM0),y)
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_dom0_mempool.c
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 7342a7f..e77ef47 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -59,7 +59,10 @@
 #include <rte_spinlock.h>
 
 #include "rte_mempool.h"
+#ifdef RTE_NEXT_ABI
 #include "rte_mempool_handler.h"
+#endif
+
 
 TAILQ_HEAD(rte_mempool_list, rte_tailq_entry);
 
@@ -150,7 +153,11 @@ mempool_add_elem(struct rte_mempool *mp, void *obj, uint32_t obj_idx,
 		obj_init(mp, obj_init_arg, obj, obj_idx);
 
 	/* enqueue in ring */
+#ifdef RTE_NEXT_ABI
 	rte_mempool_ext_put_bulk(mp, &obj, 1);
+#else
+	rte_ring_mp_enqueue_bulk(mp->ring, &obj, 1);
+#endif
 }
 
 uint32_t
@@ -420,6 +427,7 @@ rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
 					       MEMPOOL_PG_SHIFT_MAX);
 }
 
+#ifdef RTE_NEXT_ABI
 /*
  * Common mempool create function.
  * Create the mempool over already allocated chunk of memory.
@@ -711,6 +719,229 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
 
 	return mp;
 }
+#else
+/*
+ * Create the mempool over an already allocated chunk of memory.
+ * That external memory buffer can consist of physically disjoint pages.
+ * Setting vaddr to NULL makes the mempool fall back to the original
+ * behaviour: space for the mempool and its elements is allocated as one
+ * big chunk of physically contiguous memory.
+ */
+struct rte_mempool *
+rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
+		unsigned cache_size, unsigned private_data_size,
+		rte_mempool_ctor_t *mp_init, void *mp_init_arg,
+		rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
+		int socket_id, unsigned flags, void *vaddr,
+		const phys_addr_t paddr[], uint32_t pg_num, uint32_t pg_shift)
+{
+	char mz_name[RTE_MEMZONE_NAMESIZE];
+	char rg_name[RTE_RING_NAMESIZE];
+	struct rte_mempool_list *mempool_list;
+	struct rte_mempool *mp = NULL;
+	struct rte_tailq_entry *te;
+	struct rte_ring *r;
+	const struct rte_memzone *mz;
+	size_t mempool_size;
+	int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
+	int rg_flags = 0;
+	void *obj;
+	struct rte_mempool_objsz objsz;
+	void *startaddr;
+	int page_size = getpagesize();
+
+	/* compilation-time checks */
+	RTE_BUILD_BUG_ON((sizeof(struct rte_mempool) &
+			  RTE_CACHE_LINE_MASK) != 0);
+#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
+	RTE_BUILD_BUG_ON((sizeof(struct rte_mempool_cache) &
+			  RTE_CACHE_LINE_MASK) != 0);
+	RTE_BUILD_BUG_ON((offsetof(struct rte_mempool, local_cache) &
+			  RTE_CACHE_LINE_MASK) != 0);
+#endif
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+	RTE_BUILD_BUG_ON((sizeof(struct rte_mempool_debug_stats) &
+			  RTE_CACHE_LINE_MASK) != 0);
+	RTE_BUILD_BUG_ON((offsetof(struct rte_mempool, stats) &
+			  RTE_CACHE_LINE_MASK) != 0);
+#endif
+
+	mempool_list = RTE_TAILQ_CAST(rte_mempool_tailq.head, rte_mempool_list);
+
+	/* asked cache too big */
+	if (cache_size > RTE_MEMPOOL_CACHE_MAX_SIZE ||
+	    CALC_CACHE_FLUSHTHRESH(cache_size) > n) {
+		rte_errno = EINVAL;
+		return NULL;
+	}
+
+	/* check that we have both VA and PA */
+	if (vaddr != NULL && paddr == NULL) {
+		rte_errno = EINVAL;
+		return NULL;
+	}
+
+	/* Check that pg_num and pg_shift parameters are valid. */
+	if (pg_num < RTE_DIM(mp->elt_pa) || pg_shift > MEMPOOL_PG_SHIFT_MAX) {
+		rte_errno = EINVAL;
+		return NULL;
+	}
+
+	/* "no cache align" imply "no spread" */
+	if (flags & MEMPOOL_F_NO_CACHE_ALIGN)
+		flags |= MEMPOOL_F_NO_SPREAD;
+
+	/* ring flags */
+	if (flags & MEMPOOL_F_SP_PUT)
+		rg_flags |= RING_F_SP_ENQ;
+	if (flags & MEMPOOL_F_SC_GET)
+		rg_flags |= RING_F_SC_DEQ;
+
+	/* calculate mempool object sizes. */
+	if (!rte_mempool_calc_obj_size(elt_size, flags, &objsz)) {
+		rte_errno = EINVAL;
+		return NULL;
+	}
+
+	rte_rwlock_write_lock(RTE_EAL_MEMPOOL_RWLOCK);
+
+	/* allocate the ring that will be used to store objects */
+	/* Ring functions will return appropriate errors if we are
+	 * running as a secondary process etc., so no checks made
+	 * in this function for that condition */
+	snprintf(rg_name, sizeof(rg_name), RTE_MEMPOOL_MZ_FORMAT, name);
+	r = rte_ring_create(rg_name, rte_align32pow2(n+1), socket_id, rg_flags);
+	if (r == NULL)
+		goto exit;
+
+	/*
+	 * reserve a memory zone for this mempool: private data is
+	 * cache-aligned
+	 */
+	private_data_size = (private_data_size +
+		RTE_MEMPOOL_ALIGN_MASK) & (~RTE_MEMPOOL_ALIGN_MASK);
+
+	if (!rte_eal_has_hugepages()) {
+		/*
+		 * expand private data size to a whole page, so that the
+		 * first pool element will start on a new standard page
+		 */
+		int head = sizeof(struct rte_mempool);
+		int new_size = (private_data_size + head) % page_size;
+
+		if (new_size)
+			private_data_size += page_size - new_size;
+	}
+
+	/* try to allocate tailq entry */
+	te = rte_zmalloc("MEMPOOL_TAILQ_ENTRY", sizeof(*te), 0);
+	if (te == NULL) {
+		RTE_LOG(ERR, MEMPOOL, "Cannot allocate tailq entry!\n");
+		goto exit;
+	}
+
+	/*
+	 * If user provided an external memory buffer, then use it to
+	 * store mempool objects. Otherwise reserve a memzone that is large
+	 * enough to hold mempool header and metadata plus mempool objects.
+	 */
+	mempool_size = MEMPOOL_HEADER_SIZE(mp, pg_num) + private_data_size;
+	mempool_size = RTE_ALIGN_CEIL(mempool_size, RTE_MEMPOOL_ALIGN);
+	if (vaddr == NULL)
+		mempool_size += (size_t)objsz.total_size * n;
+
+	if (!rte_eal_has_hugepages()) {
+		/*
+		 * we want the memory pool to start on a page boundary,
+		 * because pool elements crossing page boundaries would
+		 * result in discontiguous physical addresses
+		 */
+		mempool_size += page_size;
+	}
+
+	snprintf(mz_name, sizeof(mz_name), RTE_MEMPOOL_MZ_FORMAT, name);
+
+	mz = rte_memzone_reserve(mz_name, mempool_size, socket_id, mz_flags);
+
+	/*
+	 * no more memory: in this case we lose previously reserved
+	 * space for the ring as we cannot free it
+	 */
+	if (mz == NULL) {
+		rte_free(te);
+		goto exit;
+	}
+
+	if (rte_eal_has_hugepages()) {
+		startaddr = (void *)mz->addr;
+	} else {
+		/* align memory pool start address on a page boundary */
+		unsigned long addr = (unsigned long)mz->addr;
+
+		if (addr & (page_size - 1)) {
+			addr += page_size;
+			addr &= ~(page_size - 1);
+		}
+		startaddr = (void *)addr;
+	}
+
+	/* init the mempool structure */
+	mp = startaddr;
+	memset(mp, 0, sizeof(*mp));
+	snprintf(mp->name, sizeof(mp->name), "%s", name);
+	mp->phys_addr = mz->phys_addr;
+	mp->ring = r;
+	mp->size = n;
+	mp->flags = flags;
+	mp->elt_size = objsz.elt_size;
+	mp->header_size = objsz.header_size;
+	mp->trailer_size = objsz.trailer_size;
+	mp->cache_size = cache_size;
+	mp->cache_flushthresh = CALC_CACHE_FLUSHTHRESH(cache_size);
+	mp->private_data_size = private_data_size;
+
+	/* calculate address of the first element for contiguous mempool. */
+	obj = (char *)mp + MEMPOOL_HEADER_SIZE(mp, pg_num) +
+		private_data_size;
+	obj = RTE_PTR_ALIGN_CEIL(obj, RTE_MEMPOOL_ALIGN);
+
+	/* populate address translation fields. */
+	mp->pg_num = pg_num;
+	mp->pg_shift = pg_shift;
+	mp->pg_mask = RTE_LEN2MASK(mp->pg_shift, typeof(mp->pg_mask));
+
+	/* mempool elements allocated together with mempool */
+	if (vaddr == NULL) {
+		mp->elt_va_start = (uintptr_t)obj;
+		mp->elt_pa[0] = mp->phys_addr +
+			(mp->elt_va_start - (uintptr_t)mp);
+
+	/* mempool elements in a separate chunk of memory. */
+	} else {
+		mp->elt_va_start = (uintptr_t)vaddr;
+		memcpy(mp->elt_pa, paddr, sizeof(mp->elt_pa[0]) * pg_num);
+	}
+
+	mp->elt_va_end = mp->elt_va_start;
+
+	/* call the initializer */
+	if (mp_init)
+		mp_init(mp, mp_init_arg);
+
+	mempool_populate(mp, n, 1, obj_init, obj_init_arg);
+
+	te->data = (void *) mp;
+
+	rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
+	TAILQ_INSERT_TAIL(mempool_list, te, next);
+	rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+exit:
+	rte_rwlock_write_unlock(RTE_EAL_MEMPOOL_RWLOCK);
+
+	return mp;
+}
+#endif
 
 /* Return the number of entries in the mempool */
 unsigned
@@ -718,7 +949,11 @@ rte_mempool_count(const struct rte_mempool *mp)
 {
 	unsigned count;
 
+#ifdef RTE_NEXT_ABI
 	count = rte_mempool_ext_get_count(mp);
+#else
+	count = rte_ring_count(mp->ring);
+#endif
 
 #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
 	{
@@ -874,6 +1109,9 @@ rte_mempool_dump(FILE *f, const struct rte_mempool *mp)
 
 	fprintf(f, "mempool <%s>@%p\n", mp->name, mp);
 	fprintf(f, "  flags=%x\n", mp->flags);
+#ifndef RTE_NEXT_ABI
+	fprintf(f, "  ring=<%s>@%p\n", mp->ring->name, mp->ring);
+#endif
 	fprintf(f, "  phys_addr=0x%" PRIx64 "\n", mp->phys_addr);
 	fprintf(f, "  size=%"PRIu32"\n", mp->size);
 	fprintf(f, "  header_size=%"PRIu32"\n", mp->header_size);
@@ -896,7 +1134,11 @@ rte_mempool_dump(FILE *f, const struct rte_mempool *mp)
 			mp->size);
 
 	cache_count = rte_mempool_dump_cache(f, mp);
+#ifdef RTE_NEXT_ABI
 	common_count = rte_mempool_ext_get_count(mp);
+#else
+	common_count = rte_ring_count(mp->ring);
+#endif
 	if ((cache_count + common_count) > mp->size)
 		common_count = mp->size - cache_count;
 	fprintf(f, "  common_pool_count=%u\n", common_count);
@@ -991,7 +1233,7 @@ void rte_mempool_walk(void (*func)(const struct rte_mempool *, void *),
 	rte_rwlock_read_unlock(RTE_EAL_MEMPOOL_RWLOCK);
 }
 
-
+#ifdef RTE_NEXT_ABI
 /* create the mempool using an external mempool manager */
 struct rte_mempool *
 rte_mempool_create_ext(const char *name, unsigned n, unsigned elt_size,
@@ -1017,3 +1259,4 @@ rte_mempool_create_ext(const char *name, unsigned n, unsigned elt_size,
 
 
 }
+#endif
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index f987d8a..4b14b80 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -175,6 +175,7 @@ struct rte_mempool_objtlr {
 #endif
 };
 
+#ifdef RTE_NEXT_ABI
 /* Handler functions for external mempool support */
 typedef void *(*rte_mempool_alloc_t)(struct rte_mempool *mp,
 		const char *name, unsigned n, int socket_id, unsigned flags);
@@ -256,12 +257,16 @@ rte_mempool_ext_get_count(const struct rte_mempool *mp);
  */
 int
 rte_mempool_ext_free(struct rte_mempool *mp);
+#endif
 
 /**
  * The RTE mempool structure.
  */
 struct rte_mempool {
 	char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool. */
+#ifndef RTE_NEXT_ABI
+	struct rte_ring *ring;           /**< Ring to store objects. */
+#endif
 	phys_addr_t phys_addr;           /**< Phys. addr. of mempool struct. */
 	int flags;                       /**< Flags of the mempool. */
 	uint32_t size;                   /**< Size of the mempool. */
@@ -275,6 +280,7 @@ struct rte_mempool {
 
 	unsigned private_data_size;      /**< Size of private data. */
 
+#ifdef RTE_NEXT_ABI
 	/* Common pool data structure pointer */
 	void *pool;
 
@@ -286,6 +292,7 @@ struct rte_mempool {
 	 * directly would not be valid for secondary processes.
 	 */
 	int16_t handler_idx;
+#endif
 
 #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
 	/** Per-lcore local cache. */
@@ -316,8 +323,9 @@ struct rte_mempool {
 #define MEMPOOL_F_NO_CACHE_ALIGN 0x0002 /**< Do not align objs on cache lines.*/
 #define MEMPOOL_F_SP_PUT         0x0004 /**< Default put is "single-producer".*/
 #define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
+#ifdef RTE_NEXT_ABI
 #define MEMPOOL_F_INT_HANDLER    0x0020 /**< Using internal mempool handler */
-
+#endif
 
 /**
  * @internal When debug is enabled, store some statistics.
@@ -847,7 +855,12 @@ void rte_mempool_dump(FILE *f, const struct rte_mempool *mp);
  */
 static inline void __attribute__((always_inline))
 __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
-		    unsigned n, __rte_unused int is_mp)
+#ifdef RTE_NEXT_ABI
+		unsigned n, __rte_unused int is_mp)
+#else
+		unsigned n, int is_mp)
+#endif
+
 {
 #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
 	struct rte_mempool_cache *cache;
@@ -887,9 +900,15 @@ __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
 
 	cache->len += n;
 
+#ifdef RTE_NEXT_ABI
 	if (unlikely(cache->len >= flushthresh)) {
 		rte_mempool_ext_put_bulk(mp, &cache->objs[cache_size],
 				cache->len - cache_size);
+#else
+	if (cache->len >= flushthresh) {
+		rte_ring_mp_enqueue_bulk(mp->ring, &cache->objs[cache_size],
+				cache->len - cache_size);
+#endif
 		cache->len = cache_size;
 		/*
 		 * Increment stats counter to tell us how many pool puts
@@ -903,10 +922,28 @@ __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
 ring_enqueue:
 #endif /* RTE_MEMPOOL_CACHE_MAX_SIZE > 0 */
 
+#ifdef RTE_NEXT_ABI
 	/* Increment stats counter to tell us how many pool puts happened */
 	__MEMPOOL_STAT_ADD(mp, put_pool, n);
 
 	rte_mempool_ext_put_bulk(mp, obj_table, n);
+#else
+	/* push remaining objects in ring */
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+	if (is_mp) {
+		if (rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n) < 0)
+			rte_panic("cannot put objects in mempool\n");
+	} else {
+		if (rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n) < 0)
+			rte_panic("cannot put objects in mempool\n");
+	}
+#else
+	if (is_mp)
+		rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n);
+	else
+		rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n);
+#endif
+#endif
 }
 
 
@@ -1030,7 +1067,11 @@ rte_mempool_put(struct rte_mempool *mp, void *obj)
  */
 static inline int __attribute__((always_inline))
 __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
+#ifdef RTE_NEXT_ABI
 		   unsigned n, __attribute__((unused))int is_mc)
+#else
+		   unsigned n, int is_mc)
+#endif
 {
 	int ret;
 #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
@@ -1054,8 +1095,13 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
 		uint32_t req = n + (cache_size - cache->len);
 
 		/* How many do we require i.e. number to fill the cache + the request */
+#ifdef RTE_NEXT_ABI
 		ret = rte_mempool_ext_get_bulk(mp,
 			&cache->objs[cache->len], req);
+#else
+		ret = rte_ring_mc_dequeue_bulk(mp->ring,
+			&cache->objs[cache->len], req);
+#endif
 		if (unlikely(ret < 0)) {
 			/*
 			 * In the offchance that we are buffer constrained,
@@ -1083,7 +1129,14 @@ ring_dequeue:
 #endif /* RTE_MEMPOOL_CACHE_MAX_SIZE > 0 */
 
 	/* get remaining objects from ring */
+#ifdef RTE_NEXT_ABI
 	ret = rte_mempool_ext_get_bulk(mp, obj_table, n);
+#else
+	if (is_mc)
+		ret = rte_ring_mc_dequeue_bulk(mp->ring, obj_table, n);
+	else
+		ret = rte_ring_sc_dequeue_bulk(mp->ring, obj_table, n);
+#endif
 
 	if (ret < 0)
 		__MEMPOOL_STAT_ADD(mp, get_fail, n);
@@ -1485,6 +1538,7 @@ ssize_t rte_mempool_xmem_usage(void *vaddr, uint32_t elt_num, size_t elt_sz,
  */
 void rte_mempool_walk(void (*func)(const struct rte_mempool *, void *arg),
 		      void *arg);
+#ifdef RTE_NEXT_ABI
 
 /**
  * Function to get the name of a mempool handler
@@ -1559,6 +1613,7 @@ rte_mempool_create_ext(const char *name, unsigned n, unsigned elt_size,
 		rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
 		int socket_id, unsigned flags,
 		const char *handler_name);
+#endif
 
 #ifdef __cplusplus
 }
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 238+ messages in thread

* Re: [PATCH v3 4/4] mempool: add in the RTE_NEXT_ABI for ABI breakages
  2016-03-09  9:50     ` [PATCH v3 4/4] mempool: add in the RTE_NEXT_ABI for ABI breakages David Hunt
@ 2016-03-09 10:46       ` Panu Matilainen
  2016-03-09 11:30         ` Hunt, David
  0 siblings, 1 reply; 238+ messages in thread
From: Panu Matilainen @ 2016-03-09 10:46 UTC (permalink / raw)
  To: David Hunt, dev

On 03/09/2016 11:50 AM, David Hunt wrote:
> This patch is for those people who want to be easily able to switch
> between the new mempool layout and the old. Change the value of
> RTE_NEXT_ABI in common_base config file

I guess the idea here is to document how to switch between the ABIs but 
to me this reads as if this patch is supposed to change the value in 
common_base. Of course there's no such change included (nor should 
there be) here, but the description could use some fine-tuning perhaps.

>
> v3: Updated to take re-work of file layouts into consideration
>
> v2: Kept all the NEXT_ABI defs to this patch so as to make the
> previous patches easier to read, and also to make it clear what
> code is necessary to keep ABI compatibility when NEXT_ABI is
> disabled.

Maybe it's just me, but:
I can see why NEXT_ABI is in a separate patch for review purposes but 
for final commit this split doesn't seem right to me. In any case it's 
quite a large change for NEXT_ABI.

In any case, you should add a deprecation notice for the oncoming ABI 
break in 16.07.

	- Panu -

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v3 3/4] mempool: allow rte_pktmbuf_pool_create switch between mempool handlers
  2016-03-09  9:50     ` [PATCH v3 3/4] mempool: allow rte_pktmbuf_pool_create switch between mempool handlers David Hunt
@ 2016-03-09 10:54       ` Panu Matilainen
  2016-03-09 11:38         ` Hunt, David
  0 siblings, 1 reply; 238+ messages in thread
From: Panu Matilainen @ 2016-03-09 10:54 UTC (permalink / raw)
  To: David Hunt, dev

On 03/09/2016 11:50 AM, David Hunt wrote:
> If the user wants to have rte_pktmbuf_pool_create() use an external mempool
> handler, they define RTE_MEMPOOL_HANDLER_NAME to be the name of the
> mempool handler they wish to use, and change RTE_MEMPOOL_HANDLER_EXT to 'y'
>
> Signed-off-by: David Hunt <david.hunt@intel.com>
> ---
>   config/common_base         | 2 ++
>   lib/librte_mbuf/rte_mbuf.c | 8 ++++++++
>   2 files changed, 10 insertions(+)
>
> diff --git a/config/common_base b/config/common_base
> index 1af28c8..9d70cf4 100644
> --- a/config/common_base
> +++ b/config/common_base
> @@ -350,6 +350,8 @@ CONFIG_RTE_RING_PAUSE_REP_COUNT=0
>   CONFIG_RTE_LIBRTE_MEMPOOL=y
>   CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE=512
>   CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG=n
> +CONFIG_RTE_MEMPOOL_HANDLER_EXT=n
> +CONFIG_RTE_MEMPOOL_HANDLER_NAME="custom_handler"
>
>   #
>   # Compile librte_mbuf
> diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
> index c18b438..42b0cd1 100644
> --- a/lib/librte_mbuf/rte_mbuf.c
> +++ b/lib/librte_mbuf/rte_mbuf.c
> @@ -167,10 +167,18 @@ rte_pktmbuf_pool_create(const char *name, unsigned n,
>   	mbp_priv.mbuf_data_room_size = data_room_size;
>   	mbp_priv.mbuf_priv_size = priv_size;
>
> +#ifdef RTE_MEMPOOL_HANDLER_EXT
> +	return rte_mempool_create_ext(name, n, elt_size,
> +		cache_size, sizeof(struct rte_pktmbuf_pool_private),
> +		rte_pktmbuf_pool_init, &mbp_priv, rte_pktmbuf_init, NULL,
> +		socket_id, 0,
> +		RTE_MEMPOOL_HANDLER_NAME);
> +#else
>   	return rte_mempool_create(name, n, elt_size,
>   		cache_size, sizeof(struct rte_pktmbuf_pool_private),
>   		rte_pktmbuf_pool_init, &mbp_priv, rte_pktmbuf_init, NULL,
>   		socket_id, 0);
> +#endif
>   }
>
>   /* do some sanity checks on a mbuf: panic if it fails */
>

This kind of thing really has to be run-time configurable, not a library 
build-time option.

	- Panu -

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v3 0/4] external mempool manager
  2016-03-09  9:50   ` [PATCH v3 0/4] " David Hunt
                       ` (3 preceding siblings ...)
  2016-03-09  9:50     ` [PATCH v3 4/4] mempool: add in the RTE_NEXT_ABI for ABI breakages David Hunt
@ 2016-03-09 11:10     ` Hunt, David
  2016-04-11 22:46     ` Yuanhan Liu
  2016-04-14 13:57     ` [PATCH v4 0/3] " Olivier Matz
  6 siblings, 0 replies; 238+ messages in thread
From: Hunt, David @ 2016-03-09 11:10 UTC (permalink / raw)
  To: dev



On 3/9/2016 9:50 AM, David Hunt wrote:

>   * removed stack handler, may re-introduce at a later date
>

Some comments regarding this have made good points in favour of keeping 
this handler in. Will do in v4.

Regards,
David.

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v3 4/4] mempool: add in the RTE_NEXT_ABI for ABI breakages
  2016-03-09 10:46       ` Panu Matilainen
@ 2016-03-09 11:30         ` Hunt, David
  2016-03-09 14:59           ` Olivier MATZ
  0 siblings, 1 reply; 238+ messages in thread
From: Hunt, David @ 2016-03-09 11:30 UTC (permalink / raw)
  To: Panu Matilainen, dev

Hi Panu,

On 3/9/2016 10:46 AM, Panu Matilainen wrote:
> On 03/09/2016 11:50 AM, David Hunt wrote:
>> This patch is for those people who want to be easily able to switch
>> between the new mempool layout and the old. Change the value of
>> RTE_NEXT_ABI in common_base config file
>
> I guess the idea here is to document how to switch between the ABIs 
> but to me this reads as if this patch is supposed to change the value 
> in common_base. Of course there's  no such change included (nor should 
> there be) here, but the description could use some fine-tuning perhaps.
>

You're right, I'll clarify the comments. v4 due soon.

>>
>> v3: Updated to take re-work of file layouts into consideration
>>
>> v2: Kept all the NEXT_ABI defs to this patch so as to make the
> previous patches easier to read, and also to make it clear what
>> code is necessary to keep ABI compatibility when NEXT_ABI is
>> disabled.
>
> Maybe it's just me, but:
> I can see why NEXT_ABI is in a separate patch for review purposes but 
> for final commit this split doesn't seem right to me. In any case it's 
> quite a large change for NEXT_ABI.
>

The patch basically re-introduces the old (pre-mempool) code as the 
refactoring of the code would have made the NEXT_ABI additions totally 
unreadable. I think this way is the lesser of two evils.

> In any case, you should add a deprecation notice for the oncoming ABI 
> break in 16.07.
>

Sure, I'll add that in v4.


>     - Panu -
>
Thanks for the comments,
Regards,
David.

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v3 3/4] mempool: allow rte_pktmbuf_pool_create switch between mempool handlers
  2016-03-09 10:54       ` Panu Matilainen
@ 2016-03-09 11:38         ` Hunt, David
  2016-03-09 11:44           ` Panu Matilainen
  0 siblings, 1 reply; 238+ messages in thread
From: Hunt, David @ 2016-03-09 11:38 UTC (permalink / raw)
  To: Panu Matilainen, dev

Hi Panu,

On 3/9/2016 10:54 AM, Panu Matilainen wrote:
> On 03/09/2016 11:50 AM, David Hunt wrote:
>> If the user wants to have rte_pktmbuf_pool_create() use an external 
>> mempool
>> handler, they define RTE_MEMPOOL_HANDLER_NAME to be the name of the
>> mempool handler they wish to use, and change RTE_MEMPOOL_HANDLER_EXT 
>> to 'y'
>>
>> Signed-off-by: David Hunt <david.hunt@intel.com>
>> ---
>>   config/common_base         | 2 ++
>>   lib/librte_mbuf/rte_mbuf.c | 8 ++++++++
>>   2 files changed, 10 insertions(+)
>>
>> diff --git a/config/common_base b/config/common_base
>> index 1af28c8..9d70cf4 100644
>> --- a/config/common_base
>> +++ b/config/common_base
>> @@ -350,6 +350,8 @@ CONFIG_RTE_RING_PAUSE_REP_COUNT=0
>>   CONFIG_RTE_LIBRTE_MEMPOOL=y
>>   CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE=512
>>   CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG=n
>> +CONFIG_RTE_MEMPOOL_HANDLER_EXT=n
>> +CONFIG_RTE_MEMPOOL_HANDLER_NAME="custom_handler"
>>
>>   #
>>   # Compile librte_mbuf
>> diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
>> index c18b438..42b0cd1 100644
>> --- a/lib/librte_mbuf/rte_mbuf.c
>> +++ b/lib/librte_mbuf/rte_mbuf.c
>> @@ -167,10 +167,18 @@ rte_pktmbuf_pool_create(const char *name, 
>> unsigned n,
>>       mbp_priv.mbuf_data_room_size = data_room_size;
>>       mbp_priv.mbuf_priv_size = priv_size;
>>
>> +#ifdef RTE_MEMPOOL_HANDLER_EXT
>> +    return rte_mempool_create_ext(name, n, elt_size,
>> +        cache_size, sizeof(struct rte_pktmbuf_pool_private),
>> +        rte_pktmbuf_pool_init, &mbp_priv, rte_pktmbuf_init, NULL,
>> +        socket_id, 0,
>> +        RTE_MEMPOOL_HANDLER_NAME);
>> +#else
>>       return rte_mempool_create(name, n, elt_size,
>>           cache_size, sizeof(struct rte_pktmbuf_pool_private),
>>           rte_pktmbuf_pool_init, &mbp_priv, rte_pktmbuf_init, NULL,
>>           socket_id, 0);
>> +#endif
>>   }
>>
>>   /* do some sanity checks on a mbuf: panic if it fails */
>>
>
> This kind of thing really has to be run-time configurable, not a 
> library build-time option.
>
>     - Panu -

Interesting point. I was attempting to minimise the amount of 
application code changes.
Would you prefer if I took out that change, and added a new
rte_pktmbuf_pool_create_ext() function which took an extra parameter as 
the mempool handler name to use?

/* helper to create a mbuf pool using external mempool handler */
struct rte_mempool *
rte_pktmbuf_pool_create_ext(const char *name, unsigned n,
     unsigned cache_size, uint16_t priv_size, uint16_t data_room_size,
     int socket_id,  const char *handler_name)

That way we could leave the old rte_pktmbuf_pool_create() exactly as it 
is, and any apps that wanted to use an
external handler could call rte_pktmbuf_pool_create_ext().
I could do this easily enough for v4 (which I hope to get out later today)?

Thanks,
David.

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v3 3/4] mempool: allow rte_pktmbuf_pool_create switch between mempool handlers
  2016-03-09 11:38         ` Hunt, David
@ 2016-03-09 11:44           ` Panu Matilainen
  0 siblings, 0 replies; 238+ messages in thread
From: Panu Matilainen @ 2016-03-09 11:44 UTC (permalink / raw)
  To: Hunt, David, dev

On 03/09/2016 01:38 PM, Hunt, David wrote:
> Hi Panu,
>
> On 3/9/2016 10:54 AM, Panu Matilainen wrote:
>> On 03/09/2016 11:50 AM, David Hunt wrote:
>>> If the user wants to have rte_pktmbuf_pool_create() use an external
>>> mempool
>>> handler, they define RTE_MEMPOOL_HANDLER_NAME to be the name of the
>>> mempool handler they wish to use, and change RTE_MEMPOOL_HANDLER_EXT
>>> to 'y'
>>>
>>> Signed-off-by: David Hunt <david.hunt@intel.com>
>>> ---
>>>   config/common_base         | 2 ++
>>>   lib/librte_mbuf/rte_mbuf.c | 8 ++++++++
>>>   2 files changed, 10 insertions(+)
>>>
>>> diff --git a/config/common_base b/config/common_base
>>> index 1af28c8..9d70cf4 100644
>>> --- a/config/common_base
>>> +++ b/config/common_base
>>> @@ -350,6 +350,8 @@ CONFIG_RTE_RING_PAUSE_REP_COUNT=0
>>>   CONFIG_RTE_LIBRTE_MEMPOOL=y
>>>   CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE=512
>>>   CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG=n
>>> +CONFIG_RTE_MEMPOOL_HANDLER_EXT=n
>>> +CONFIG_RTE_MEMPOOL_HANDLER_NAME="custom_handler"
>>>
>>>   #
>>>   # Compile librte_mbuf
>>> diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
>>> index c18b438..42b0cd1 100644
>>> --- a/lib/librte_mbuf/rte_mbuf.c
>>> +++ b/lib/librte_mbuf/rte_mbuf.c
>>> @@ -167,10 +167,18 @@ rte_pktmbuf_pool_create(const char *name,
>>> unsigned n,
>>>       mbp_priv.mbuf_data_room_size = data_room_size;
>>>       mbp_priv.mbuf_priv_size = priv_size;
>>>
>>> +#ifdef RTE_MEMPOOL_HANDLER_EXT
>>> +    return rte_mempool_create_ext(name, n, elt_size,
>>> +        cache_size, sizeof(struct rte_pktmbuf_pool_private),
>>> +        rte_pktmbuf_pool_init, &mbp_priv, rte_pktmbuf_init, NULL,
>>> +        socket_id, 0,
>>> +        RTE_MEMPOOL_HANDLER_NAME);
>>> +#else
>>>       return rte_mempool_create(name, n, elt_size,
>>>           cache_size, sizeof(struct rte_pktmbuf_pool_private),
>>>           rte_pktmbuf_pool_init, &mbp_priv, rte_pktmbuf_init, NULL,
>>>           socket_id, 0);
>>> +#endif
>>>   }
>>>
>>>   /* do some sanity checks on a mbuf: panic if it fails */
>>>
>>
>> This kind of thing really has to be run-time configurable, not a
>> library build-time option.
>>
>>     - Panu -
>
> Interesting point. I was attempting to minimise the amount of
> application code changes.

The problem with such build options is that the feature is for all 
practical purposes unusable in a distro setting where DPDK is just 
another shared library used by multiple applications.

> Would you prefer if I took out that change, and added a new
> rte_pktmbuf_pool_create_ext() function which took an extra parameter as
> the mempool handler name to use?
>
> /* helper to create a mbuf pool using external mempool handler */
> struct rte_mempool *
> rte_pktmbuf_pool_create_ext(const char *name, unsigned n,
>      unsigned cache_size, uint16_t priv_size, uint16_t data_room_size,
>      int socket_id,  const char *handler_name)
>
> That way we could leave the old rte_pktmbuf_pool_create() exactly as it
> is, and any apps that wanted to use an
> external handler could call rte_pktmbuf_pool_create_ext().
> I could do this easily enough for v4 (which I hope to get out later today)?

Yes, that's the way to do it. Thanks.

	- Panu -



> Thanks,
> David.
>
>
>
>
>

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH 2/6] mempool: add stack (lifo) based external mempool handler
  2016-03-08 20:45       ` Venkatesan, Venky
@ 2016-03-09 14:53         ` Olivier MATZ
  0 siblings, 0 replies; 238+ messages in thread
From: Olivier MATZ @ 2016-03-09 14:53 UTC (permalink / raw)
  To: Venkatesan, Venky, Hunt, David, dev

Hi,

>> Hi David,
>>
>> On 02/16/2016 03:48 PM, David Hunt wrote:
>>> adds a simple stack based mempool handler
>>>
>>> Signed-off-by: David Hunt <david.hunt@intel.com>
>>> ---
>>>  lib/librte_mempool/Makefile            |   2 +-
>>>  lib/librte_mempool/rte_mempool.c       |   4 +-
>>>  lib/librte_mempool/rte_mempool.h       |   1 +
>>>  lib/librte_mempool/rte_mempool_stack.c | 164
>>> +++++++++++++++++++++++++++++++++
>>>  4 files changed, 169 insertions(+), 2 deletions(-)  create mode
>>> 100644 lib/librte_mempool/rte_mempool_stack.c
>>>
>>
>> I don't get what is the purpose of this handler. Is it an example or is it
>> something that could be useful for dpdk applications?
>>
> This is actually something that is useful for pipelining apps,
> where the mempool cache doesn't really work; for example, where we
> have one core doing rx (and alloc), and another core doing
> Tx (and return). In such a case, the mempool ring simply cycles
> through all the mbufs, resulting in a LLC miss on every mbuf
> allocated when the number of mbufs is large. A stack recycles
> buffers more effectively in this case.
> 

While I agree on the principle, if this is the case the commit should
come with an explanation about when this handler should be used, a
small test report showing the performance numbers and probably an
example app.

Also, I think there is some room for optimization; in particular, I
don't think that the spinlock will scale with many cores.

Regards,
Olivier

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v3 4/4] mempool: add in the RTE_NEXT_ABI for ABI breakages
  2016-03-09 11:30         ` Hunt, David
@ 2016-03-09 14:59           ` Olivier MATZ
  2016-03-09 16:28             ` Hunt, David
  0 siblings, 1 reply; 238+ messages in thread
From: Olivier MATZ @ 2016-03-09 14:59 UTC (permalink / raw)
  To: Hunt, David, Panu Matilainen, dev

Hi David,

On 03/09/2016 12:30 PM, Hunt, David wrote:
> Hi Panu,
> 
> On 3/9/2016 10:46 AM, Panu Matilainen wrote:
>> On 03/09/2016 11:50 AM, David Hunt wrote:
>>> This patch is for those people who want to be easily able to switch
>>> between the new mempool layout and the old. Change the value of
>>> RTE_NEXT_ABI in common_base config file
>>
>> I guess the idea here is to document how to switch between the ABIs
>> but to me this reads as if this patch is supposed to change the value
>> in common_base. Of course there's  no such change included (nor should
>> there be) here, but the description could use some fine-tuning perhaps.
>>
> 
> You're right, I'll clarify the comments. v4 due soon.
> 
>>>
>>> v3: Updated to take re-work of file layouts into consideration
>>>
>>> v2: Kept all the NEXT_ABI defs to this patch so as to make the
>>> previous patches easier to read, and also to make it clear what
>>> code is necessary to keep ABI compatibility when NEXT_ABI is
>>> disabled.
>>
>> Maybe it's just me, but:
>> I can see why NEXT_ABI is in a separate patch for review purposes but
>> for final commit this split doesn't seem right to me. In any case it's
>> quite a large change for NEXT_ABI.
>>
> 
> The patch basically re-introduces the old (pre-mempool) code as the
> refactoring of the code would have made the NEXT_ABI additions totally
> unreadable. I think this way is the lesser of two evils.
> 
>> In any case, you should add a deprecation notice for the oncoming ABI
>> break in 16.07.
>>
> 
> Sure, I'll add that in v4.

Sorry, maybe I wasn't very clear in my previous messages. For me, the
NEXT_ABI is not the proper solution because, as Panu stated, it makes
the patch hard to read. My understanding of NEXT_ABI is that it should
only be used if the changes are small enough. Duplicating the code with
a big #ifdef NEXT_ABI is not an option to me either.

So that's why the deprecation notice should be used instead. But in this
case, it means that this patch won't be present in 16.04, but will be
added in 16.07.

Regards,
Olivier

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v3 4/4] mempool: add in the RTE_NEXT_ABI for ABI breakages
  2016-03-09 14:59           ` Olivier MATZ
@ 2016-03-09 16:28             ` Hunt, David
  2016-03-09 16:31               ` Olivier MATZ
  0 siblings, 1 reply; 238+ messages in thread
From: Hunt, David @ 2016-03-09 16:28 UTC (permalink / raw)
  To: Olivier MATZ, Panu Matilainen, dev

Hi Olivier,

On 3/9/2016 2:59 PM, Olivier MATZ wrote:
> Hi David,
>
> On 03/09/2016 12:30 PM, Hunt, David wrote:
>> Hi Panu,
>>
>> On 3/9/2016 10:46 AM, Panu Matilainen wrote:
>>> On 03/09/2016 11:50 AM, David Hunt wrote:
>>>> This patch is for those people who want to be easily able to switch
>>>> between the new mempool layout and the old. Change the value of
>>>> RTE_NEXT_ABI in common_base config file
>>> I guess the idea here is to document how to switch between the ABIs
>>> but to me this reads as if this patch is supposed to change the value
>>> in common_base. Of course there's  no such change included (nor should
>>> there be) here, but the description could use some fine-tuning perhaps.
>>>
>> You're right, I'll clarify the comments. v4 due soon.
>>
>>>> v3: Updated to take re-work of file layouts into consideration
>>>>
>>>> v2: Kept all the NEXT_ABI defs to this patch so as to make the
>>>> previous patches easier to read, and also to make it clear what
>>>> code is necessary to keep ABI compatibility when NEXT_ABI is
>>>> disabled.
>>> Maybe it's just me, but:
>>> I can see why NEXT_ABI is in a separate patch for review purposes but
>>> for final commit this split doesn't seem right to me. In any case it's
>>> quite a large change for NEXT_ABI.
>>>
>> The patch basically re-introduces the old (pre-mempool) code as the
>> refactoring of the code would have made the NEXT_ABI additions totally
>> unreadable. I think this way is the lesser of two evils.
>>
>>> In any case, you should add a deprecation notice for the oncoming ABI
>>> break in 16.07.
>>>
>> Sure, I'll add that in v4.
> Sorry, maybe I wasn't very clear in my previous messages. For me, the
> NEXT_ABI is not the proper solution because, as Panu stated, it makes
> the patch hard to read. My understanding of NEXT_ABI is that it should
> only be used if the changes are small enough. Duplicating the code with
> a big #ifdef NEXT_ABI is not an option to me either.
>
> So that's why the deprecation notice should be used instead. But in this
> case, it means that this patch won't be present in 16.04, but will be
> added in 16.07.
>
> Regards,
> Olivier

Sure, v4 will remove the NEXT_ABI patch, and replace it with just the 
ABI break announcement for 16.07. For anyone who wants to try out the 
patch, they can always get it from patchwork, but not as part of 16.04.

Thanks,
David.

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v3 4/4] mempool: add in the RTE_NEXT_ABI for ABI breakages
  2016-03-09 16:28             ` Hunt, David
@ 2016-03-09 16:31               ` Olivier MATZ
  2016-03-09 16:39                 ` Hunt, David
  0 siblings, 1 reply; 238+ messages in thread
From: Olivier MATZ @ 2016-03-09 16:31 UTC (permalink / raw)
  To: Hunt, David, Panu Matilainen, dev

Hi David,

On 03/09/2016 05:28 PM, Hunt, David wrote:
>> Sorry, maybe I wasn't very clear in my previous messages. For me, the
>> NEXT_ABI is not the proper solution because, as Panu stated, it makes
>> the patch hard to read. My understanding of NEXT_ABI is that it should
>> only be used if the changes are small enough. Duplicating the code with
>> a big #ifdef NEXT_ABI is not an option to me either.
>>
>> So that's why the deprecation notice should be used instead. But in this
>> case, it means that this patch won't be present in 16.04, but will be
>> added in 16.07.
>>
> Sure, v4 will remove the NEXT_ABI patch, and replace it with just the
> ABI break announcement for 16.07. Anyone who wants to try out the
> patch can always get it from patchwork, but it won't be part of 16.04.

I think it's better to have the deprecation notice in a separate
mail, outside of the patch series, so Thomas can just apply this
one and leave the series pending for 16.07.

Thanks,
Olivier


* Re: [PATCH v3 4/4] mempool: add in the RTE_NEXT_ABI for ABI breakages
  2016-03-09 16:31               ` Olivier MATZ
@ 2016-03-09 16:39                 ` Hunt, David
  0 siblings, 0 replies; 238+ messages in thread
From: Hunt, David @ 2016-03-09 16:39 UTC (permalink / raw)
  To: Olivier MATZ, Panu Matilainen, dev

Hi Olivier,

On 3/9/2016 4:31 PM, Olivier MATZ wrote:
> Hi David,
>
> On 03/09/2016 05:28 PM, Hunt, David wrote:
>
>> Sure, v4 will remove the NEXT_ABI patch, and replace it with just the
>> ABI break announcement for 16.07. Anyone who wants to try out the
>> patch can always get it from patchwork, but it won't be part of 16.04.
> I think it's better to have the deprecation notice in a separate
> mail, outside of the patch series, so Thomas can just apply this
> one and leave the series pending for 16.07.
>
> Thanks,
> Olivier

Yes, sure, makes perfect sense.

Thanks,
David.


* Re: [PATCH v3 0/4] external mempool manager
  2016-03-09  9:50   ` [PATCH v3 0/4] " David Hunt
                       ` (4 preceding siblings ...)
  2016-03-09 11:10     ` [PATCH v3 0/4] external mempool manager Hunt, David
@ 2016-04-11 22:46     ` Yuanhan Liu
  2016-04-14 13:57     ` [PATCH v4 0/3] " Olivier Matz
  6 siblings, 0 replies; 238+ messages in thread
From: Yuanhan Liu @ 2016-04-11 22:46 UTC (permalink / raw)
  To: David Hunt; +Cc: dev

On Wed, Mar 09, 2016 at 09:50:33AM +0000, David Hunt wrote:
...
> The external mempool manager needs to provide the following functions.
>  1. alloc     - allocates the mempool memory, and adds each object onto a ring
>  2. put       - puts an object back into the mempool once an application has
>                 finished with it
>  3. get       - gets an object from the mempool for use by the application
>  4. get_count - gets the number of available objects in the mempool
>  5. free      - frees the mempool memory

It's a lengthy and great description, and it's a pity that you don't
include it in the commit log: the cover letter will not be in the history.

> 
> For an example of a simple malloc-based mempool manager, see
> lib/librte_mempool/custom_mempool.c

I didn't see this file. Forgot to include it?

	--yliu


* Re: [PATCH v3 1/4] mempool: add external mempool manager support
  2016-03-09  9:50     ` [PATCH v3 1/4] mempool: add external mempool manager support David Hunt
@ 2016-04-11 22:52       ` Yuanhan Liu
  0 siblings, 0 replies; 238+ messages in thread
From: Yuanhan Liu @ 2016-04-11 22:52 UTC (permalink / raw)
  To: David Hunt; +Cc: dev

Hi David,

On Wed, Mar 09, 2016 at 09:50:34AM +0000, David Hunt wrote:
> -static struct rte_tailq_elem rte_mempool_tailq = {
> +struct rte_tailq_elem rte_mempool_tailq = {

Why removing static? I didn't see it's referenced somewhere else.


> +	if (flags && MEMPOOL_F_INT_HANDLER) {

I would assume it's "flags & MEMPOOL_F_INT_HANDLER". BTW, you might want
to do a thorough check, as I found a few more typos like this.

	--yliu

> +	if (flags && MEMPOOL_F_INT_HANDLER) {
> +
> +		if (rte_eal_has_hugepages()) {
> +			startaddr = (void *)mz->addr;
> +		} else {
> +			/* align memory pool start address on a page boundary */
> +			unsigned long addr = (unsigned long)mz->addr;
> +
> +			if (addr & (page_size - 1)) {
> +				addr += page_size;
> +				addr &= ~(page_size - 1);
> +			}
> +			startaddr = (void *)addr;
>  		}
> -		startaddr = (void*)addr;
> +	} else {
> +		startaddr = (void *)mz->addr;
>  	}


* [PATCH v4 0/3] external mempool manager
  2016-03-09  9:50   ` [PATCH v3 0/4] " David Hunt
                       ` (5 preceding siblings ...)
  2016-04-11 22:46     ` Yuanhan Liu
@ 2016-04-14 13:57     ` Olivier Matz
  2016-04-14 13:57       ` [PATCH v4 1/3] mempool: support external handler Olivier Matz
                         ` (3 more replies)
  6 siblings, 4 replies; 238+ messages in thread
From: Olivier Matz @ 2016-04-14 13:57 UTC (permalink / raw)
  To: dev, david.hunt; +Cc: yuanhan.liu, pmatilai

Here's a reworked version of the patch initially sent by David Hunt.
The main change is that it is rebased on top of the "mempool: rework
memory allocation" series [1], which simplifies a lot the first patch.

[1] http://dpdk.org/ml/archives/dev/2016-April/037464.html

v4 changes:
 * remove the rte_mempool_create_ext() function. To change the handler, the
   user has to do the following:
   - mp = rte_mempool_create_empty()
   - rte_mempool_set_handler(mp, "my_handler")
   - rte_mempool_populate_default(mp)
   This avoids adding another function with more than 10 arguments and
   duplicating the doxygen comments
 * change the api of rte_mempool_alloc_t: only the mempool pointer is required
   as all information is available in it
 * change the api of rte_mempool_free_t: remove return value
 * move inline wrapper functions from the .c to the .h (else they won't be
   inlined). This implies having a single header file (rte_mempool.h);
   otherwise it would generate cross-dependency issues.
 * remove now unused MEMPOOL_F_INT_HANDLER (note: it was misused anyway due
   to the use of && instead of &)
 * fix build in debug mode (__MEMPOOL_STAT_ADD(mp, put_pool, n) remaining)
 * fix build with shared libraries (global handler has to be declared in
   the .map file)
 * rationalize #include order
 * remove unused function rte_mempool_get_handler_name()
 * rename some structures, fields, functions
 * remove the static in front of rte_tailq_elem rte_mempool_tailq (comment
   from Yuanhan)
 * test the ext mempool handler in the same file as the standard mempool
   tests, avoiding code duplication
 * rework the custom handler in mempool_test
 * rework a bit the patch selecting default mbuf pool handler
 * fix some doxygen comments

Things that should still be discussed:

- Panu pointed out that having a compile-time configuration
  option for selecting the default mbuf handler is not a good idea.
  I mostly agree, except in one case (and that's why I kept this patch):
  if a specific architecture has its own way to provide an efficient
  pool handler for mbufs, it could be the proper place to have this
  option. But as far as I know, there is no such architecture today
  in dpdk.

- The other question I would like to raise is about the use cases.
  The cover letter below could be a bit more explicit about what this
  feature will be used for.



This is the initial unmodified cover letter from David Hunt:

Hi list.

Here's the v3 version patch for an external mempool manager

v3 changes:
 * simplified the file layout, renamed to rte_mempool_handler.[hc]
 * moved the default handlers into rte_mempool_default.c
 * moved the example handler out into app/test/test_ext_mempool.c
 * removed is_mc/is_mp change, slight perf degradation on sp cached operation
 * removed stack handler, may re-introduce at a later date
 * Changes out of code reviews

v2 changes:
 * There was a lot of duplicate code between rte_mempool_xmem_create and
   rte_mempool_create_ext. This has now been refactored and is now
   hopefully cleaner.
 * The RTE_NEXT_ABI define is now used to allow building of the library
   in a format that is compatible with binaries built against previous
   versions of DPDK.
 * Changes out of code reviews. Hopefully I've got most of them included.

The External Mempool Manager is an extension to the mempool API that allows
users to add and use an external mempool manager, which allows external memory
subsystems such as external hardware memory management systems and software
based memory allocators to be used with DPDK.

The existing API to the internal DPDK mempool manager will remain unchanged
and will be backward compatible. However, there will be an ABI breakage, as
the mempool struct is changing. These changes are all contained within
RTE_NEXT_ABI defs, and the current or next ABI code can be selected with
the CONFIG_RTE_NEXT_ABI config setting.

There are two aspects to external mempool manager.
  1. Adding the code for your new mempool handler. This is achieved by adding a
     new mempool handler source file into the librte_mempool library, and
     using the REGISTER_MEMPOOL_HANDLER macro.
  2. Using the new API to call rte_mempool_create_ext to create a new mempool
     using the name parameter to identify which handler to use.

New API calls added
 1. A new mempool 'create' function which accepts mempool handler name.
 2. A new mempool 'rte_get_mempool_handler' function which accepts mempool
    handler name, and returns the index to the relevant set of callbacks for
    that mempool handler

Several external mempool managers may be used in the same application. A new
mempool can then be created by using the new 'create' function, providing the
mempool handler name to point the mempool to the relevant mempool manager
callback structure.

The old 'create' function can still be called by legacy programs, and will
internally work out the mempool handle based on the flags provided (single
producer, single consumer, etc). By default handles are created internally to
implement the built-in DPDK mempool manager and mempool types.

The external mempool manager needs to provide the following functions.
 1. alloc     - allocates the mempool memory, and adds each object onto a ring
 2. put       - puts an object back into the mempool once an application has
                finished with it
 3. get       - gets an object from the mempool for use by the application
 4. get_count - gets the number of available objects in the mempool
 5. free      - frees the mempool memory

Every time a get/put/get_count is called from the application/PMD, the
callback for that mempool is called. These functions are in the fastpath,
and any unoptimised handlers may limit performance.

The new APIs are as follows:

1. rte_mempool_create_ext

struct rte_mempool *
rte_mempool_create_ext(const char * name, unsigned n,
        unsigned cache_size, unsigned private_data_size,
        int socket_id, unsigned flags,
        const char * handler_name);

2. rte_mempool_get_handler_name

char *
rte_mempool_get_handler_name(struct rte_mempool *mp);

Please see rte_mempool.h for further information on the parameters.


The important thing to note is that the mempool handler is passed by name
to rte_mempool_create_ext, and that in turn calls rte_get_mempool_handler to
get the handler index, which is stored in the rte_mempool structure. This
allows multiple processes to use the same mempool, as the function pointers
are accessed via the handler index.

The mempool handler structure contains callbacks to the implementation of
the handler, and is set up for registration as follows:

static struct rte_mempool_handler handler_sp_mc = {
    .name = "ring_sp_mc",
    .alloc = rte_mempool_common_ring_alloc,
    .put = common_ring_sp_put,
    .get = common_ring_mc_get,
    .get_count = common_ring_get_count,
    .free = common_ring_free,
};

And then the following macro will register the handler in the array of handlers

REGISTER_MEMPOOL_HANDLER(handler_mp_mc);

For an example of a simple malloc-based mempool manager, see
lib/librte_mempool/custom_mempool.c

For an example of API usage, please see app/test/test_ext_mempool.c, which
implements a rudimentary mempool manager using simple mallocs for each
mempool object. This file also contains the callbacks and self registration
for the new handler.

David Hunt (2):
  mempool: support external handler
  mbuf: get default mempool handler from configuration

Olivier Matz (1):
  app/test: test external mempool handler

 app/test/test_mempool.c                    | 113 +++++++++++++++
 app/test/test_mempool_perf.c               |   1 -
 config/common_base                         |   1 +
 lib/librte_mbuf/rte_mbuf.c                 |  21 ++-
 lib/librte_mempool/Makefile                |   2 +
 lib/librte_mempool/rte_mempool.c           |  72 ++++------
 lib/librte_mempool/rte_mempool.h           | 212 +++++++++++++++++++++++++----
 lib/librte_mempool/rte_mempool_default.c   | 147 ++++++++++++++++++++
 lib/librte_mempool/rte_mempool_handler.c   | 139 +++++++++++++++++++
 lib/librte_mempool/rte_mempool_version.map |   4 +
 10 files changed, 637 insertions(+), 75 deletions(-)
 create mode 100644 lib/librte_mempool/rte_mempool_default.c
 create mode 100644 lib/librte_mempool/rte_mempool_handler.c

-- 
2.1.4


* [PATCH v4 1/3] mempool: support external handler
  2016-04-14 13:57     ` [PATCH v4 0/3] " Olivier Matz
@ 2016-04-14 13:57       ` Olivier Matz
  2016-04-14 13:57       ` [PATCH v4 2/3] app/test: test external mempool handler Olivier Matz
                         ` (2 subsequent siblings)
  3 siblings, 0 replies; 238+ messages in thread
From: Olivier Matz @ 2016-04-14 13:57 UTC (permalink / raw)
  To: dev, david.hunt; +Cc: yuanhan.liu, pmatilai

From: David Hunt <david.hunt@intel.com>

Until now, the objects stored in a mempool were internally stored in a
ring. This patch introduces the possibility to register external handlers
replacing the ring.

The default behavior remains unchanged, but calling the new function
rte_mempool_set_handler() right after rte_mempool_create_empty() makes it
possible to change the handler that will be used when populating the mempool.

Signed-off-by: David Hunt <david.hunt@intel.com>
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
 app/test/test_mempool_perf.c               |   1 -
 lib/librte_mempool/Makefile                |   2 +
 lib/librte_mempool/rte_mempool.c           |  72 ++++------
 lib/librte_mempool/rte_mempool.h           | 212 +++++++++++++++++++++++++----
 lib/librte_mempool/rte_mempool_default.c   | 147 ++++++++++++++++++++
 lib/librte_mempool/rte_mempool_handler.c   | 139 +++++++++++++++++++
 lib/librte_mempool/rte_mempool_version.map |   4 +
 7 files changed, 506 insertions(+), 71 deletions(-)
 create mode 100644 lib/librte_mempool/rte_mempool_default.c
 create mode 100644 lib/librte_mempool/rte_mempool_handler.c

diff --git a/app/test/test_mempool_perf.c b/app/test/test_mempool_perf.c
index cdc02a0..091c1df 100644
--- a/app/test/test_mempool_perf.c
+++ b/app/test/test_mempool_perf.c
@@ -161,7 +161,6 @@ per_lcore_mempool_test(__attribute__((unused)) void *arg)
 							   n_get_bulk);
 				if (unlikely(ret < 0)) {
 					rte_mempool_dump(stdout, mp);
-					rte_ring_dump(stdout, mp->ring);
 					/* in this case, objects are lost... */
 					return -1;
 				}
diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
index 43423e0..f19366e 100644
--- a/lib/librte_mempool/Makefile
+++ b/lib/librte_mempool/Makefile
@@ -42,6 +42,8 @@ LIBABIVER := 2
 
 # all source are stored in SRCS-y
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool.c
+SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_handler.c
+SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_default.c
 # install includes
 SYMLINK-$(CONFIG_RTE_LIBRTE_MEMPOOL)-include := rte_mempool.h
 
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 7104a41..9e9a7fc 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -148,7 +148,7 @@ mempool_add_elem(struct rte_mempool *mp, void *obj, phys_addr_t physaddr)
 #endif
 
 	/* enqueue in ring */
-	rte_ring_sp_enqueue(mp->ring, obj);
+	rte_mempool_ext_put_bulk(mp, &obj, 1);
 }
 
 /* call obj_cb() for each mempool element */
@@ -300,39 +300,6 @@ rte_mempool_xmem_usage(__rte_unused void *vaddr, uint32_t elt_num,
 	return (size_t)paddr_idx << pg_shift;
 }
 
-/* create the internal ring */
-static int
-rte_mempool_ring_create(struct rte_mempool *mp)
-{
-	int rg_flags = 0, ret;
-	char rg_name[RTE_RING_NAMESIZE];
-	struct rte_ring *r;
-
-	ret = snprintf(rg_name, sizeof(rg_name),
-		RTE_MEMPOOL_MZ_FORMAT, mp->name);
-	if (ret < 0 || ret >= (int)sizeof(rg_name))
-		return -ENAMETOOLONG;
-
-	/* ring flags */
-	if (mp->flags & MEMPOOL_F_SP_PUT)
-		rg_flags |= RING_F_SP_ENQ;
-	if (mp->flags & MEMPOOL_F_SC_GET)
-		rg_flags |= RING_F_SC_DEQ;
-
-	/* Allocate the ring that will be used to store objects.
-	 * Ring functions will return appropriate errors if we are
-	 * running as a secondary process etc., so no checks made
-	 * in this function for that condition. */
-	r = rte_ring_create(rg_name, rte_align32pow2(mp->size + 1),
-		mp->socket_id, rg_flags);
-	if (r == NULL)
-		return -rte_errno;
-
-	mp->ring = r;
-	mp->flags |= MEMPOOL_F_RING_CREATED;
-	return 0;
-}
-
 /* free a memchunk allocated with rte_memzone_reserve() */
 static void
 rte_mempool_memchunk_mz_free(__rte_unused struct rte_mempool_memhdr *memhdr,
@@ -350,7 +317,7 @@ rte_mempool_free_memchunks(struct rte_mempool *mp)
 	void *elt;
 
 	while (!STAILQ_EMPTY(&mp->elt_list)) {
-		rte_ring_sc_dequeue(mp->ring, &elt);
+		rte_mempool_ext_get_bulk(mp, &elt, 1);
 		(void)elt;
 		STAILQ_REMOVE_HEAD(&mp->elt_list, next);
 		mp->populated_size--;
@@ -378,15 +345,18 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
 	unsigned i = 0;
 	size_t off;
 	struct rte_mempool_memhdr *memhdr;
-	int ret;
 
 	/* create the internal ring if not already done */
 	if ((mp->flags & MEMPOOL_F_RING_CREATED) == 0) {
-		ret = rte_mempool_ring_create(mp);
-		if (ret < 0)
-			return ret;
+		rte_errno = 0;
+		mp->pool = rte_mempool_ext_alloc(mp);
+		if (mp->pool == NULL) {
+			if (rte_errno == 0)
+				return -EINVAL;
+			else
+				return -rte_errno;
+		}
 	}
-
 	/* mempool is already populated */
 	if (mp->populated_size >= mp->size)
 		return -ENOSPC;
@@ -695,7 +665,7 @@ rte_mempool_free(struct rte_mempool *mp)
 	rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
 
 	rte_mempool_free_memchunks(mp);
-	rte_ring_free(mp->ring);
+	rte_mempool_ext_free(mp);
 	rte_memzone_free(mp->mz);
 }
 
@@ -807,6 +777,20 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
 		RTE_PTR_ADD(mp, MEMPOOL_HEADER_SIZE(mp, 0));
 
 	te->data = mp;
+
+	/*
+	 * Since we have 4 combinations of SP/SC/MP/MC, examine the flags to
+	 * set the correct index into the handler table.
+	 */
+	if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
+		rte_mempool_set_handler(mp, "ring_sp_sc");
+	else if (flags & MEMPOOL_F_SP_PUT)
+		rte_mempool_set_handler(mp, "ring_sp_mc");
+	else if (flags & MEMPOOL_F_SC_GET)
+		rte_mempool_set_handler(mp, "ring_mp_sc");
+	else
+		rte_mempool_set_handler(mp, "ring_mp_mc");
+
 	rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
 	TAILQ_INSERT_TAIL(mempool_list, te, next);
 	rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
@@ -922,7 +906,7 @@ rte_mempool_count(const struct rte_mempool *mp)
 	unsigned count;
 	unsigned lcore_id;
 
-	count = rte_ring_count(mp->ring);
+	count = rte_mempool_ext_get_count(mp);
 
 	if (mp->cache_size == 0)
 		return count;
@@ -1111,7 +1095,7 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 
 	fprintf(f, "mempool <%s>@%p\n", mp->name, mp);
 	fprintf(f, "  flags=%x\n", mp->flags);
-	fprintf(f, "  ring=<%s>@%p\n", mp->ring->name, mp->ring);
+	fprintf(f, "  pool=%p\n", mp->pool);
 	fprintf(f, "  phys_addr=0x%" PRIx64 "\n", mp->mz->phys_addr);
 	fprintf(f, "  nb_mem_chunks=%u\n", mp->nb_mem_chunks);
 	fprintf(f, "  size=%"PRIu32"\n", mp->size);
@@ -1132,7 +1116,7 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 	}
 
 	cache_count = rte_mempool_dump_cache(f, mp);
-	common_count = rte_ring_count(mp->ring);
+	common_count = rte_mempool_ext_get_count(mp);
 	if ((cache_count + common_count) > mp->size)
 		common_count = mp->size - cache_count;
 	fprintf(f, "  common_pool_count=%u\n", common_count);
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 96bd047..d77a246 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -67,6 +67,7 @@
 #include <inttypes.h>
 #include <sys/queue.h>
 
+#include <rte_spinlock.h>
 #include <rte_log.h>
 #include <rte_debug.h>
 #include <rte_lcore.h>
@@ -203,7 +204,15 @@ struct rte_mempool_memhdr {
  */
 struct rte_mempool {
 	char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool. */
-	struct rte_ring *ring;           /**< Ring to store objects. */
+	void *pool;                      /**< Ring or ext-pool to store objects. */
+	/**
+	 * Index into the array of structs containing callback fn pointers.
+	 * We're using an index here rather than pointers to the callbacks
+	 * to facilitate any secondary processes that may want to use
+	 * this mempool. Any function pointers stored in the mempool
+	 * directly would not be valid for secondary processes.
+	 */
+	int32_t handler_idx;
 	const struct rte_memzone *mz;    /**< Memzone where mempool is allocated */
 	int flags;                       /**< Flags of the mempool. */
 	int socket_id;                   /**< Socket id passed at mempool creation. */
@@ -322,6 +331,175 @@ void __mempool_check_cookies(const struct rte_mempool *mp,
 #define __mempool_check_cookies(mp, obj_table_const, n, free) do {} while(0)
 #endif /* RTE_LIBRTE_MEMPOOL_DEBUG */
 
+#define RTE_MEMPOOL_HANDLER_NAMESIZE 32 /**< Max length of handler name. */
+
+/** Allocate the external pool. */
+typedef void *(*rte_mempool_alloc_t)(struct rte_mempool *mp);
+
+/** Free the external pool. */
+typedef void (*rte_mempool_free_t)(void *p);
+
+/** Put an object in the external pool. */
+typedef int (*rte_mempool_put_t)(void *p, void * const *obj_table, unsigned n);
+
+/** Get an object from the external pool. */
+typedef int (*rte_mempool_get_t)(void *p, void **obj_table, unsigned n);
+
+/** Return the number of available objects in the external pool. */
+typedef unsigned (*rte_mempool_get_count)(void *p);
+
+/** Structure defining a mempool handler. */
+struct rte_mempool_handler {
+	char name[RTE_MEMPOOL_HANDLER_NAMESIZE]; /**< Name of mempool handler */
+	rte_mempool_alloc_t alloc;       /**< Allocate the external pool. */
+	rte_mempool_free_t free;         /**< Free the external pool. */
+	rte_mempool_put_t put;           /**< Put an object. */
+	rte_mempool_get_t get;           /**< Get an object. */
+	rte_mempool_get_count get_count; /**< Get the number of available objs. */
+} __rte_cache_aligned;
+
+#define RTE_MEMPOOL_MAX_HANDLER_IDX 16  /**< Max number of registered handlers */
+
+/** Structure storing the table of registered handlers. */
+struct rte_mempool_handler_table {
+	rte_spinlock_t sl;     /**< Spinlock for add/delete. */
+	uint32_t num_handlers; /**< Number of handlers in the table. */
+	/** Storage for all possible handlers. */
+	struct rte_mempool_handler handler[RTE_MEMPOOL_MAX_HANDLER_IDX];
+};
+
+/** Array of registered handlers */
+extern struct rte_mempool_handler_table rte_mempool_handler_table;
+
+/**
+ * @internal Get the mempool handler from its index.
+ *
+ * @param handler_idx
+ *   The index of the handler in the handler table. It must be a valid
+ *   index: (0 <= idx < num_handlers).
+ * @return
+ *   The pointer to the handler in the table.
+ */
+static struct rte_mempool_handler *
+rte_mempool_handler_get(int handler_idx)
+{
+	return &rte_mempool_handler_table.handler[handler_idx];
+}
+
+/**
+ * @internal wrapper for external mempool manager alloc callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @return
+ *   The opaque pointer to the external pool.
+ */
+void *
+rte_mempool_ext_alloc(struct rte_mempool *mp);
+
+/**
+ * @internal wrapper for external mempool manager get callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param obj_table
+ *   Pointer to a table of void * pointers (objects).
+ * @param n
+ *   Number of objects to get.
+ * @return
+ *   - 0: Success; got n objects.
+ *   - <0: Error; code of handler get function.
+ */
+static inline int
+rte_mempool_ext_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
+{
+	struct rte_mempool_handler *handler;
+
+	handler = rte_mempool_handler_get(mp->handler_idx);
+	return handler->get(mp->pool, obj_table, n);
+}
+
+/**
+ * @internal wrapper for external mempool manager put callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param obj_table
+ *   Pointer to a table of void * pointers (objects).
+ * @param n
+ *   Number of objects to put.
+ * @return
+ *   - 0: Success; n objects supplied.
+ *   - <0: Error; code of handler put function.
+ */
+static inline int
+rte_mempool_ext_put_bulk(struct rte_mempool *mp, void * const *obj_table,
+		unsigned n)
+{
+	struct rte_mempool_handler *handler;
+
+	handler = rte_mempool_handler_get(mp->handler_idx);
+	return handler->put(mp->pool, obj_table, n);
+}
+
+/**
+ * @internal wrapper for external mempool manager get_count callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @return
+ *   The number of available objects in the external pool.
+ */
+unsigned
+rte_mempool_ext_get_count(const struct rte_mempool *mp);
+
+/**
+ * @internal wrapper for external mempool manager free callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ */
+void
+rte_mempool_ext_free(struct rte_mempool *mp);
+
+/**
+ * Set the handler of a mempool
+ *
+ * This can only be done on a mempool that is not populated, i.e. just after
+ * a call to rte_mempool_create_empty().
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param name
+ *   Name of the handler.
+ * @return
+ *   - 0: Success; the new handler is configured.
+ *   - <0: Error (errno)
+ */
+int
+rte_mempool_set_handler(struct rte_mempool *mp, const char *name);
+
+/**
+ * Register an external pool handler.
+ *
+ * @param h
+ *   Pointer to the external pool handler
+ * @return
+ *   - >=0: Success; return the index of the handler in the table.
+ *   - <0: Error (errno)
+ */
+int rte_mempool_handler_register(struct rte_mempool_handler *h);
+
+/**
+ * Macro to statically register an external pool handler.
+ */
+#define MEMPOOL_REGISTER_HANDLER(h)					\
+	void mp_hdlr_init_##h(void);					\
+	void __attribute__((constructor, used)) mp_hdlr_init_##h(void)	\
+	{								\
+		rte_mempool_handler_register(&h);			\
+	}
+
 /**
  * An object callback function for mempool.
  *
@@ -733,7 +911,7 @@ void rte_mempool_dump(FILE *f, struct rte_mempool *mp);
  */
 static inline void __attribute__((always_inline))
 __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
-		    unsigned n, int is_mp)
+		    unsigned n, __rte_unused int is_mp)
 {
 	struct rte_mempool_cache *cache;
 	uint32_t index;
@@ -771,7 +949,7 @@ __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
 	cache->len += n;
 
 	if (cache->len >= flushthresh) {
-		rte_ring_mp_enqueue_bulk(mp->ring, &cache->objs[cache_size],
+		rte_mempool_ext_put_bulk(mp, &cache->objs[cache_size],
 				cache->len - cache_size);
 		cache->len = cache_size;
 	}
@@ -779,26 +957,10 @@ __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
 	return;
 
 ring_enqueue:
-
 	/* push remaining objects in ring */
-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-	if (is_mp) {
-		if (rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n) < 0)
-			rte_panic("cannot put objects in mempool\n");
-	}
-	else {
-		if (rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n) < 0)
-			rte_panic("cannot put objects in mempool\n");
-	}
-#else
-	if (is_mp)
-		rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n);
-	else
-		rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n);
-#endif
+	rte_mempool_ext_put_bulk(mp, obj_table, n);
 }
 
-
 /**
  * Put several objects back in the mempool (multi-producers safe).
  *
@@ -919,7 +1081,7 @@ rte_mempool_put(struct rte_mempool *mp, void *obj)
  */
 static inline int __attribute__((always_inline))
 __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
-		   unsigned n, int is_mc)
+		   unsigned n, __rte_unused int is_mc)
 {
 	int ret;
 	struct rte_mempool_cache *cache;
@@ -942,7 +1104,8 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
 		uint32_t req = n + (cache_size - cache->len);
 
 		/* How many do we require i.e. number to fill the cache + the request */
-		ret = rte_ring_mc_dequeue_bulk(mp->ring, &cache->objs[cache->len], req);
+		ret = rte_mempool_ext_get_bulk(mp,
+			&cache->objs[cache->len], req);
 		if (unlikely(ret < 0)) {
 			/*
 			 * In the offchance that we are buffer constrained,
@@ -969,10 +1132,7 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
 ring_dequeue:
 
 	/* get remaining objects from ring */
-	if (is_mc)
-		ret = rte_ring_mc_dequeue_bulk(mp->ring, obj_table, n);
-	else
-		ret = rte_ring_sc_dequeue_bulk(mp->ring, obj_table, n);
+	ret = rte_mempool_ext_get_bulk(mp, obj_table, n);
 
 	if (ret < 0)
 		__MEMPOOL_STAT_ADD(mp, get_fail, n);
diff --git a/lib/librte_mempool/rte_mempool_default.c b/lib/librte_mempool/rte_mempool_default.c
new file mode 100644
index 0000000..a6ac65a
--- /dev/null
+++ b/lib/librte_mempool/rte_mempool_default.c
@@ -0,0 +1,147 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <string.h>
+
+#include <rte_errno.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+
+static int
+common_ring_mp_put(void *p, void * const *obj_table, unsigned n)
+{
+	return rte_ring_mp_enqueue_bulk((struct rte_ring *)p, obj_table, n);
+}
+
+static int
+common_ring_sp_put(void *p, void * const *obj_table, unsigned n)
+{
+	return rte_ring_sp_enqueue_bulk((struct rte_ring *)p, obj_table, n);
+}
+
+static int
+common_ring_mc_get(void *p, void **obj_table, unsigned n)
+{
+	return rte_ring_mc_dequeue_bulk((struct rte_ring *)p, obj_table, n);
+}
+
+static int
+common_ring_sc_get(void *p, void **obj_table, unsigned n)
+{
+	return rte_ring_sc_dequeue_bulk((struct rte_ring *)p, obj_table, n);
+}
+
+static unsigned
+common_ring_get_count(void *p)
+{
+	return rte_ring_count((struct rte_ring *)p);
+}
+
+
+static void *
+common_ring_alloc(struct rte_mempool *mp)
+{
+	int rg_flags = 0, ret;
+	char rg_name[RTE_RING_NAMESIZE];
+	struct rte_ring *r;
+
+	ret = snprintf(rg_name, sizeof(rg_name),
+		RTE_MEMPOOL_MZ_FORMAT, mp->name);
+	if (ret < 0 || ret >= (int)sizeof(rg_name)) {
+		rte_errno = ENAMETOOLONG;
+		return NULL;
+	}
+
+	/* ring flags */
+	if (mp->flags & MEMPOOL_F_SP_PUT)
+		rg_flags |= RING_F_SP_ENQ;
+	if (mp->flags & MEMPOOL_F_SC_GET)
+		rg_flags |= RING_F_SC_DEQ;
+
+	/* Allocate the ring that will be used to store objects.
+	 * Ring functions will return appropriate errors if we are
+	 * running as a secondary process etc., so no checks made
+	 * in this function for that condition. */
+	r = rte_ring_create(rg_name, rte_align32pow2(mp->size + 1),
+		mp->socket_id, rg_flags);
+
+	return r;
+}
+
+static void
+common_ring_free(void *p)
+{
+	rte_ring_free((struct rte_ring *)p);
+}
+
+static struct rte_mempool_handler handler_mp_mc = {
+	.name = "ring_mp_mc",
+	.alloc = common_ring_alloc,
+	.free = common_ring_free,
+	.put = common_ring_mp_put,
+	.get = common_ring_mc_get,
+	.get_count = common_ring_get_count,
+};
+
+static struct rte_mempool_handler handler_sp_sc = {
+	.name = "ring_sp_sc",
+	.alloc = common_ring_alloc,
+	.free = common_ring_free,
+	.put = common_ring_sp_put,
+	.get = common_ring_sc_get,
+	.get_count = common_ring_get_count,
+};
+
+static struct rte_mempool_handler handler_mp_sc = {
+	.name = "ring_mp_sc",
+	.alloc = common_ring_alloc,
+	.free = common_ring_free,
+	.put = common_ring_mp_put,
+	.get = common_ring_sc_get,
+	.get_count = common_ring_get_count,
+};
+
+static struct rte_mempool_handler handler_sp_mc = {
+	.name = "ring_sp_mc",
+	.alloc = common_ring_alloc,
+	.free = common_ring_free,
+	.put = common_ring_sp_put,
+	.get = common_ring_mc_get,
+	.get_count = common_ring_get_count,
+};
+
+MEMPOOL_REGISTER_HANDLER(handler_mp_mc);
+MEMPOOL_REGISTER_HANDLER(handler_sp_sc);
+MEMPOOL_REGISTER_HANDLER(handler_mp_sc);
+MEMPOOL_REGISTER_HANDLER(handler_sp_mc);
diff --git a/lib/librte_mempool/rte_mempool_handler.c b/lib/librte_mempool/rte_mempool_handler.c
new file mode 100644
index 0000000..78611f8
--- /dev/null
+++ b/lib/librte_mempool/rte_mempool_handler.c
@@ -0,0 +1,139 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2016 6WIND S.A.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <string.h>
+
+#include <rte_mempool.h>
+
+/* indirect jump table to support external memory pools */
+struct rte_mempool_handler_table rte_mempool_handler_table = {
+	.sl = RTE_SPINLOCK_INITIALIZER,
+	.num_handlers = 0
+};
+
+/* add a new handler in rte_mempool_handler_table, return its index */
+int
+rte_mempool_handler_register(struct rte_mempool_handler *h)
+{
+	struct rte_mempool_handler *handler;
+	int16_t handler_idx;
+
+	rte_spinlock_lock(&rte_mempool_handler_table.sl);
+
+	if (rte_mempool_handler_table.num_handlers >= RTE_MEMPOOL_MAX_HANDLER_IDX) {
+		rte_spinlock_unlock(&rte_mempool_handler_table.sl);
+		RTE_LOG(ERR, MEMPOOL,
+			"Maximum number of mempool handlers exceeded\n");
+		return -ENOSPC;
+	}
+
+	if (h->put == NULL || h->get == NULL || h->get_count == NULL) {
+		rte_spinlock_unlock(&rte_mempool_handler_table.sl);
+		RTE_LOG(ERR, MEMPOOL,
+			"Missing callback while registering mempool handler\n");
+		return -EINVAL;
+	}
+
+	handler_idx = rte_mempool_handler_table.num_handlers++;
+	handler = &rte_mempool_handler_table.handler[handler_idx];
+	snprintf(handler->name, sizeof(handler->name), "%s", h->name);
+	handler->alloc = h->alloc;
+	handler->free = h->free;
+	handler->put = h->put;
+	handler->get = h->get;
+	handler->get_count = h->get_count;
+
+	rte_spinlock_unlock(&rte_mempool_handler_table.sl);
+
+	return handler_idx;
+}
+
+/* wrapper to allocate an external pool handler */
+void *
+rte_mempool_ext_alloc(struct rte_mempool *mp)
+{
+	struct rte_mempool_handler *handler;
+
+	handler = rte_mempool_handler_get(mp->handler_idx);
+	if (handler->alloc == NULL)
+		return NULL;
+	return handler->alloc(mp);
+}
+
+/* wrapper to free an external pool handler */
+void
+rte_mempool_ext_free(struct rte_mempool *mp)
+{
+	struct rte_mempool_handler *handler;
+
+	handler = rte_mempool_handler_get(mp->handler_idx);
+	if (handler->free == NULL)
+		return;
+	handler->free(mp->pool);
+}
+
+/* wrapper to get available objects in an external pool handler */
+unsigned
+rte_mempool_ext_get_count(const struct rte_mempool *mp)
+{
+	struct rte_mempool_handler *handler;
+
+	handler = rte_mempool_handler_get(mp->handler_idx);
+	return handler->get_count(mp->pool);
+}
+
+/* set the handler of a mempool */
+int
+rte_mempool_set_handler(struct rte_mempool *mp, const char *name)
+{
+	struct rte_mempool_handler *handler = NULL;
+	unsigned i;
+
+	/* too late, the mempool is already populated */
+	if (mp->flags & MEMPOOL_F_RING_CREATED)
+		return -EEXIST;
+
+	for (i = 0; i < rte_mempool_handler_table.num_handlers; i++) {
+		if (!strcmp(name, rte_mempool_handler_table.handler[i].name)) {
+			handler = &rte_mempool_handler_table.handler[i];
+			break;
+		}
+	}
+
+	if (handler == NULL)
+		return -EINVAL;
+
+	mp->handler_idx = i;
+	return 0;
+}
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index 7d1f670..1ec9751 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -19,6 +19,8 @@ DPDK_2.0 {
 DPDK_16.7 {
 	global:
 
+	rte_mempool_handler_table;
+
 	rte_mempool_obj_iter;
 	rte_mempool_mem_iter;
 	rte_mempool_create_empty;
@@ -28,6 +30,8 @@ DPDK_16.7 {
 	rte_mempool_populate_default;
 	rte_mempool_populate_anon;
 	rte_mempool_free;
+	rte_mempool_set_handler;
+	rte_mempool_handler_register;
 
 	local: *;
 } DPDK_2.0;
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 238+ messages in thread

* [PATCH v4 2/3] app/test: test external mempool handler
  2016-04-14 13:57     ` [PATCH v4 0/3] " Olivier Matz
  2016-04-14 13:57       ` [PATCH v4 1/3] mempool: support external handler Olivier Matz
@ 2016-04-14 13:57       ` Olivier Matz
  2016-04-14 13:57       ` [PATCH v4 3/3] mbuf: get default mempool handler from configuration Olivier Matz
  2016-05-19 13:44       ` mempool: external mempool manager David Hunt
  3 siblings, 0 replies; 238+ messages in thread
From: Olivier Matz @ 2016-04-14 13:57 UTC (permalink / raw)
  To: dev, david.hunt; +Cc: yuanhan.liu, pmatilai

Use a minimal custom mempool external handler and check that it also
passes basic mempool autotests.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
 app/test/test_mempool.c | 113 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 113 insertions(+)

diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c
index c96ed27..09951cc 100644
--- a/app/test/test_mempool.c
+++ b/app/test/test_mempool.c
@@ -85,6 +85,96 @@
 static rte_atomic32_t synchro;
 
 /*
+ * Simple example of custom mempool structure. Holds pointers to all the
+ * elements which are simply malloc'd in this example.
+ */
+struct custom_mempool {
+	rte_spinlock_t lock;
+	unsigned count;
+	unsigned size;
+	void *elts[];
+};
+
+/*
+ * Loop though all the element pointers and allocate a chunk of memory, then
+ * insert that memory into the ring.
+ */
+static void *
+custom_mempool_alloc(struct rte_mempool *mp)
+{
+	struct custom_mempool *cm;
+
+	cm = rte_zmalloc("custom_mempool",
+		sizeof(struct custom_mempool) + mp->size * sizeof(void *), 0);
+	if (cm == NULL)
+		return NULL;
+
+	rte_spinlock_init(&cm->lock);
+	cm->count = 0;
+	cm->size = mp->size;
+	return cm;
+}
+
+static void
+custom_mempool_free(void *p)
+{
+	rte_free(p);
+}
+
+static int
+custom_mempool_put(void *p, void * const *obj_table, unsigned n)
+{
+	struct custom_mempool *cm = (struct custom_mempool *)p;
+	int ret = 0;
+
+	rte_spinlock_lock(&cm->lock);
+	if (cm->count + n > cm->size) {
+		ret = -ENOBUFS;
+	} else {
+		memcpy(&cm->elts[cm->count], obj_table, sizeof(void *) * n);
+		cm->count += n;
+	}
+	rte_spinlock_unlock(&cm->lock);
+	return ret;
+}
+
+
+static int
+custom_mempool_get(void *p, void **obj_table, unsigned n)
+{
+	struct custom_mempool *cm = (struct custom_mempool *)p;
+	int ret = 0;
+
+	rte_spinlock_lock(&cm->lock);
+	if (n > cm->count) {
+		ret = -ENOENT;
+	} else {
+		cm->count -= n;
+		memcpy(obj_table, &cm->elts[cm->count], sizeof(void *) * n);
+	}
+	rte_spinlock_unlock(&cm->lock);
+	return ret;
+}
+
+static unsigned
+custom_mempool_get_count(void *p)
+{
+	struct custom_mempool *cm = (struct custom_mempool *)p;
+	return cm->count;
+}
+
+static struct rte_mempool_handler mempool_handler_custom = {
+	.name = "custom_handler",
+	.alloc = custom_mempool_alloc,
+	.free = custom_mempool_free,
+	.put = custom_mempool_put,
+	.get = custom_mempool_get,
+	.get_count = custom_mempool_get_count,
+};
+
+MEMPOOL_REGISTER_HANDLER(mempool_handler_custom);
+
+/*
  * save the object number in the first 4 bytes of object data. All
  * other bytes are set to 0.
  */
@@ -479,6 +569,7 @@ test_mempool(void)
 {
 	struct rte_mempool *mp_cache = NULL;
 	struct rte_mempool *mp_nocache = NULL;
+	struct rte_mempool *mp_ext = NULL;
 
 	rte_atomic32_init(&synchro);
 
@@ -507,6 +598,27 @@ test_mempool(void)
 		goto err;
 	}
 
+	/* create a mempool with an external handler */
+	mp_ext = rte_mempool_create_empty("test_ext",
+		MEMPOOL_SIZE,
+		MEMPOOL_ELT_SIZE,
+		RTE_MEMPOOL_CACHE_MAX_SIZE, 0,
+		SOCKET_ID_ANY, 0);
+
+	if (mp_ext == NULL) {
+		printf("cannot allocate mp_ext mempool\n");
+		goto err;
+	}
+	if (rte_mempool_set_handler(mp_ext, "custom_handler") < 0) {
+		printf("cannot set custom handler\n");
+		goto err;
+	}
+	if (rte_mempool_populate_default(mp_ext) < 0) {
+		printf("cannot populate mp_ext mempool\n");
+		goto err;
+	}
+	rte_mempool_obj_iter(mp_ext, my_obj_init, NULL);
+
 	/* retrieve the mempool from its name */
 	if (rte_mempool_lookup("test_nocache") != mp_nocache) {
 		printf("Cannot lookup mempool from its name\n");
@@ -547,6 +659,7 @@ test_mempool(void)
 err:
 	rte_mempool_free(mp_nocache);
 	rte_mempool_free(mp_cache);
+	rte_mempool_free(mp_ext);
 	return -1;
 }
 
-- 
2.1.4


* [PATCH v4 3/3] mbuf: get default mempool handler from configuration
  2016-04-14 13:57     ` [PATCH v4 0/3] " Olivier Matz
  2016-04-14 13:57       ` [PATCH v4 1/3] mempool: support external handler Olivier Matz
  2016-04-14 13:57       ` [PATCH v4 2/3] app/test: test external mempool handler Olivier Matz
@ 2016-04-14 13:57       ` Olivier Matz
  2016-05-19 13:44       ` mempool: external mempool manager David Hunt
  3 siblings, 0 replies; 238+ messages in thread
From: Olivier Matz @ 2016-04-14 13:57 UTC (permalink / raw)
  To: dev, david.hunt; +Cc: yuanhan.liu, pmatilai

From: David Hunt <david.hunt@intel.com>

By default, the mempool handler used for mbuf allocations is a multi
producer and multi consumer ring. We could imagine a target (maybe some
network processors?) that provides a hardware-assisted pool
mechanism. In this case, the default configuration for this architecture
would contain a different value for RTE_MBUF_DEFAULT_MEMPOOL_HANDLER.

Signed-off-by: David Hunt <david.hunt@intel.com>
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
 config/common_base         |  1 +
 lib/librte_mbuf/rte_mbuf.c | 21 +++++++++++++++++----
 2 files changed, 18 insertions(+), 4 deletions(-)

diff --git a/config/common_base b/config/common_base
index 0124e86..178cb7e 100644
--- a/config/common_base
+++ b/config/common_base
@@ -390,6 +390,7 @@ CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG=n
 #
 CONFIG_RTE_LIBRTE_MBUF=y
 CONFIG_RTE_LIBRTE_MBUF_DEBUG=n
+CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_HANDLER="ring_mp_mc"
 CONFIG_RTE_MBUF_REFCNT_ATOMIC=y
 CONFIG_RTE_PKTMBUF_HEADROOM=128
 
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index dc0467c..a72f8f2 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -153,6 +153,7 @@ rte_pktmbuf_pool_create(const char *name, unsigned n,
 	unsigned cache_size, uint16_t priv_size, uint16_t data_room_size,
 	int socket_id)
 {
+	struct rte_mempool *mp;
 	struct rte_pktmbuf_pool_private mbp_priv;
 	unsigned elt_size;
 
@@ -167,10 +168,22 @@ rte_pktmbuf_pool_create(const char *name, unsigned n,
 	mbp_priv.mbuf_data_room_size = data_room_size;
 	mbp_priv.mbuf_priv_size = priv_size;
 
-	return rte_mempool_create(name, n, elt_size,
-		cache_size, sizeof(struct rte_pktmbuf_pool_private),
-		rte_pktmbuf_pool_init, &mbp_priv, rte_pktmbuf_init, NULL,
-		socket_id, 0);
+	mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
+		 sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
+	if (mp == NULL)
+		return NULL;
+
+	rte_mempool_set_handler(mp, RTE_MBUF_DEFAULT_MEMPOOL_HANDLER);
+	rte_pktmbuf_pool_init(mp, &mbp_priv);
+
+	if (rte_mempool_populate_default(mp) < 0) {
+		rte_mempool_free(mp);
+		return NULL;
+	}
+
+	rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);
+
+	return mp;
 }
 
 /* do some sanity checks on a mbuf: panic if it fails */
-- 
2.1.4


* mempool: external mempool manager
  2016-04-14 13:57     ` [PATCH v4 0/3] " Olivier Matz
                         ` (2 preceding siblings ...)
  2016-04-14 13:57       ` [PATCH v4 3/3] mbuf: get default mempool handler from configuration Olivier Matz
@ 2016-05-19 13:44       ` David Hunt
  2016-05-19 13:44         ` [PATCH v5 1/3] mempool: support external handler David Hunt
                           ` (3 more replies)
  3 siblings, 4 replies; 238+ messages in thread
From: David Hunt @ 2016-05-19 13:44 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, yuanhan.liu, pmatilai


Here's the latest version of the External Mempool Manager patchset.
It's rebased on top of the latest head as of 19/5/2016, including
Olivier's 35-part patch series on mempool re-org [1].

[1] http://dpdk.org/ml/archives/dev/2016-May/039229.html

v5 changes:
 * rebasing, as it is dependent on another patch series [1]

v4 changes (Olivier Matz):
 * remove the rte_mempool_create_ext() function. To change the handler, the
   user has to do the following:
   - mp = rte_mempool_create_empty()
   - rte_mempool_set_handler(mp, "my_handler")
   - rte_mempool_populate_default(mp)
   This avoids adding another function with more than 10 arguments and
   duplicating the doxygen comments
 * change the api of rte_mempool_alloc_t: only the mempool pointer is required
   as all information is available in it
 * change the api of rte_mempool_free_t: remove return value
 * move inline wrapper functions from the .c to the .h (else they won't be
   inlined). This implies having a single header file (rte_mempool.h);
   otherwise it would generate cross-dependency issues.
 * remove now unused MEMPOOL_F_INT_HANDLER (note: it was misused anyway due
   to the use of && instead of &)
 * fix build in debug mode (__MEMPOOL_STAT_ADD(mp, put_pool, n) remaining)
 * fix build with shared libraries (global handler has to be declared in
   the .map file)
 * rationalize #include order
 * remove unused function rte_mempool_get_handler_name()
 * rename some structures, fields, functions
 * remove the static in front of rte_tailq_elem rte_mempool_tailq (comment
   from Yuanhan)
 * test the ext mempool handler in the same file as the standard mempool tests,
   avoiding code duplication
 * rework the custom handler in mempool_test
 * rework a bit the patch selecting default mbuf pool handler
 * fix some doxygen comments

v3 changes:
 * simplified the file layout, renamed to rte_mempool_handler.[hc]
 * moved the default handlers into rte_mempool_default.c
 * moved the example handler out into app/test/test_ext_mempool.c
 * removed is_mc/is_mp change, slight perf degradation on sp cached operation
 * removed stack handler, may re-introduce at a later date
 * Changes out of code reviews

v2 changes:
 * There was a lot of duplicate code between rte_mempool_xmem_create and
   rte_mempool_create_ext. This has now been refactored and is now
   hopefully cleaner.
 * The RTE_NEXT_ABI define is now used to allow building of the library
   in a format that is compatible with binaries built against previous
   versions of DPDK.
 * Changes out of code reviews. Hopefully I've got most of them included.

The External Mempool Manager is an extension to the mempool API that allows
users to add and use an external mempool manager, which allows external memory
subsystems such as external hardware memory management systems and software
based memory allocators to be used with DPDK.

The existing API to the internal DPDK mempool manager will remain unchanged
and will be backward compatible. However, there will be an ABI breakage, as
the mempool struct is changing. These changes are all contained within
RTE_NEXT_ABI defs, and the current or next code can be selected with the
CONFIG_RTE_NEXT_ABI config setting.

There are two aspects to external mempool manager.
  1. Adding the code for your new mempool handler. This is achieved by adding a
     new mempool handler source file into the librte_mempool library, and
     using the REGISTER_MEMPOOL_HANDLER macro.
  2. Using the new API to call rte_mempool_create_empty and
     rte_mempool_set_handler to create a new mempool, using the name
     parameter to identify which handler to use.

New API calls added
 1. A new rte_mempool_create_empty() function
 2. rte_mempool_set_handler() which sets the mempool's handler
 3. rte_mempool_populate_default() and rte_mempool_populate_anon() functions,
    which populate the mempool using the relevant handler

Several external mempool managers may be used in the same application. A new
mempool can then be created by using the new 'create' function, providing the
mempool handler name to point the mempool to the relevant mempool manager
callback structure.

The old 'create' function can still be called by legacy programs, and will
internally work out the mempool handle based on the flags provided (single
producer, single consumer, etc). By default handles are created internally to
implement the built-in DPDK mempool manager and mempool types.

The external mempool manager needs to provide the following functions.
 1. alloc     - allocates the mempool memory, and adds each object onto a ring
 2. put       - puts an object back into the mempool once an application has
                finished with it
 3. get       - gets an object from the mempool for use by the application
 4. get_count - gets the number of available objects in the mempool
 5. free      - frees the mempool memory

Every time a get/put/get_count is called from the application/PMD, the
callback for that mempool is invoked. These functions are on the fast path,
and any unoptimised handler may limit performance.

The new APIs are as follows:

1. rte_mempool_create_empty

struct rte_mempool *
rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
    unsigned cache_size, unsigned private_data_size,
    int socket_id, unsigned flags);

2. rte_mempool_set_handler()

int
rte_mempool_set_handler(struct rte_mempool *mp, const char *name);

3. rte_mempool_populate_default()

int rte_mempool_populate_default(struct rte_mempool *mp);

4. rte_mempool_populate_anon()

int rte_mempool_populate_anon(struct rte_mempool *mp);

Please see rte_mempool.h for further information on the parameters.


The important thing to note is that the mempool handler is passed by name
to rte_mempool_set_handler, which looks through the handler array to
get the handler index, which is then stored in the rte_mempool structure.
This allows multiple processes to use the same mempool, as the function
pointers are accessed via the handler index.

The mempool handler structure contains callbacks to the implementation of
the handler, and is set up for registration as follows:

static const struct rte_mempool_handler handler_sp_mc = {
    .name = "ring_sp_mc",
    .alloc = rte_mempool_common_ring_alloc,
    .put = common_ring_sp_put,
    .get = common_ring_mc_get,
    .get_count = common_ring_get_count,
    .free = common_ring_free,
};

And then the following macro will register the handler in the array of handlers

REGISTER_MEMPOOL_HANDLER(handler_mp_mc);

For an example of a simple malloc-based mempool manager, see
lib/librte_mempool/custom_mempool.c

For an example of API usage, please see app/test/test_mempool.c, which
implements a rudimentary "custom_handler" mempool manager using simple mallocs
for each mempool object. This file also contains the callbacks and self
registration for the new handler.

David Hunt (2):
  mempool: support external handler
  mbuf: get default mempool handler from configuration

Olivier Matz (1):
  app/test: test external mempool handler


* [PATCH v5 1/3] mempool: support external handler
  2016-05-19 13:44       ` mempool: external mempool manager David Hunt
@ 2016-05-19 13:44         ` David Hunt
  2016-05-23 12:35           ` [dpdk-dev,v5,1/3] " Jan Viktorin
  2016-05-24 15:35           ` [PATCH v5 1/3] " Jerin Jacob
  2016-05-19 13:45         ` [PATCH v5 2/3] app/test: test external mempool handler David Hunt
                           ` (2 subsequent siblings)
  3 siblings, 2 replies; 238+ messages in thread
From: David Hunt @ 2016-05-19 13:44 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, yuanhan.liu, pmatilai, David Hunt

Until now, the objects stored in a mempool were internally stored in a
ring. This patch introduces the possibility to register external handlers
replacing the ring.

The default behavior remains unchanged, but calling the new function
rte_mempool_set_handler() right after rte_mempool_create_empty() allows
changing the handler that will be used when populating the mempool.

v5 changes: rebasing on top of 35 patch set mempool work.

Signed-off-by: David Hunt <david.hunt@intel.com>
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
 app/test/test_mempool_perf.c               |   1 -
 lib/librte_mempool/Makefile                |   2 +
 lib/librte_mempool/rte_mempool.c           |  73 ++++------
 lib/librte_mempool/rte_mempool.h           | 212 +++++++++++++++++++++++++----
 lib/librte_mempool/rte_mempool_default.c   | 147 ++++++++++++++++++++
 lib/librte_mempool/rte_mempool_handler.c   | 139 +++++++++++++++++++
 lib/librte_mempool/rte_mempool_version.map |   4 +
 7 files changed, 506 insertions(+), 72 deletions(-)
 create mode 100644 lib/librte_mempool/rte_mempool_default.c
 create mode 100644 lib/librte_mempool/rte_mempool_handler.c

diff --git a/app/test/test_mempool_perf.c b/app/test/test_mempool_perf.c
index cdc02a0..091c1df 100644
--- a/app/test/test_mempool_perf.c
+++ b/app/test/test_mempool_perf.c
@@ -161,7 +161,6 @@ per_lcore_mempool_test(__attribute__((unused)) void *arg)
 							   n_get_bulk);
 				if (unlikely(ret < 0)) {
 					rte_mempool_dump(stdout, mp);
-					rte_ring_dump(stdout, mp->ring);
 					/* in this case, objects are lost... */
 					return -1;
 				}
diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
index 43423e0..f19366e 100644
--- a/lib/librte_mempool/Makefile
+++ b/lib/librte_mempool/Makefile
@@ -42,6 +42,8 @@ LIBABIVER := 2
 
 # all source are stored in SRCS-y
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool.c
+SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_handler.c
+SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_default.c
 # install includes
 SYMLINK-$(CONFIG_RTE_LIBRTE_MEMPOOL)-include := rte_mempool.h
 
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 1ab6701..6ec2b3f 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -148,7 +148,7 @@ mempool_add_elem(struct rte_mempool *mp, void *obj, phys_addr_t physaddr)
 #endif
 
 	/* enqueue in ring */
-	rte_ring_sp_enqueue(mp->ring, obj);
+	rte_mempool_ext_put_bulk(mp, &obj, 1);
 }
 
 /* call obj_cb() for each mempool element */
@@ -300,40 +300,6 @@ rte_mempool_xmem_usage(__rte_unused void *vaddr, uint32_t elt_num,
 	return (size_t)paddr_idx << pg_shift;
 }
 
-/* create the internal ring */
-static int
-rte_mempool_ring_create(struct rte_mempool *mp)
-{
-	int rg_flags = 0, ret;
-	char rg_name[RTE_RING_NAMESIZE];
-	struct rte_ring *r;
-
-	ret = snprintf(rg_name, sizeof(rg_name),
-		RTE_MEMPOOL_MZ_FORMAT, mp->name);
-	if (ret < 0 || ret >= (int)sizeof(rg_name))
-		return -ENAMETOOLONG;
-
-	/* ring flags */
-	if (mp->flags & MEMPOOL_F_SP_PUT)
-		rg_flags |= RING_F_SP_ENQ;
-	if (mp->flags & MEMPOOL_F_SC_GET)
-		rg_flags |= RING_F_SC_DEQ;
-
-	/* Allocate the ring that will be used to store objects.
-	 * Ring functions will return appropriate errors if we are
-	 * running as a secondary process etc., so no checks made
-	 * in this function for that condition.
-	 */
-	r = rte_ring_create(rg_name, rte_align32pow2(mp->size + 1),
-		mp->socket_id, rg_flags);
-	if (r == NULL)
-		return -rte_errno;
-
-	mp->ring = r;
-	mp->flags |= MEMPOOL_F_RING_CREATED;
-	return 0;
-}
-
 /* free a memchunk allocated with rte_memzone_reserve() */
 static void
 rte_mempool_memchunk_mz_free(__rte_unused struct rte_mempool_memhdr *memhdr,
@@ -351,7 +317,7 @@ rte_mempool_free_memchunks(struct rte_mempool *mp)
 	void *elt;
 
 	while (!STAILQ_EMPTY(&mp->elt_list)) {
-		rte_ring_sc_dequeue(mp->ring, &elt);
+		rte_mempool_ext_get_bulk(mp, &elt, 1);
 		(void)elt;
 		STAILQ_REMOVE_HEAD(&mp->elt_list, next);
 		mp->populated_size--;
@@ -380,15 +346,18 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
 	unsigned i = 0;
 	size_t off;
 	struct rte_mempool_memhdr *memhdr;
-	int ret;
 
 	/* create the internal ring if not already done */
 	if ((mp->flags & MEMPOOL_F_RING_CREATED) == 0) {
-		ret = rte_mempool_ring_create(mp);
-		if (ret < 0)
-			return ret;
+		rte_errno = 0;
+		mp->pool = rte_mempool_ext_alloc(mp);
+		if (mp->pool == NULL) {
+			if (rte_errno == 0)
+				return -EINVAL;
+			else
+				return -rte_errno;
+		}
 	}
-
 	/* mempool is already populated */
 	if (mp->populated_size >= mp->size)
 		return -ENOSPC;
@@ -700,7 +669,7 @@ rte_mempool_free(struct rte_mempool *mp)
 	rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
 
 	rte_mempool_free_memchunks(mp);
-	rte_ring_free(mp->ring);
+	rte_mempool_ext_free(mp);
 	rte_memzone_free(mp->mz);
 }
 
@@ -812,6 +781,20 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
 		RTE_PTR_ADD(mp, MEMPOOL_HEADER_SIZE(mp, 0));
 
 	te->data = mp;
+
+	/*
+	 * Since we have 4 combinations of the SP/SC/MP/MC examine the flags to
+	 * set the correct index into the handler table.
+	 */
+	if ((flags & MEMPOOL_F_SP_PUT) && (flags & MEMPOOL_F_SC_GET))
+		rte_mempool_set_handler(mp, "ring_sp_sc");
+	else if (flags & MEMPOOL_F_SP_PUT)
+		rte_mempool_set_handler(mp, "ring_sp_mc");
+	else if (flags & MEMPOOL_F_SC_GET)
+		rte_mempool_set_handler(mp, "ring_mp_sc");
+	else
+		rte_mempool_set_handler(mp, "ring_mp_mc");
+
 	rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
 	TAILQ_INSERT_TAIL(mempool_list, te, next);
 	rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
@@ -927,7 +910,7 @@ rte_mempool_count(const struct rte_mempool *mp)
 	unsigned count;
 	unsigned lcore_id;
 
-	count = rte_ring_count(mp->ring);
+	count = rte_mempool_ext_get_count(mp);
 
 	if (mp->cache_size == 0)
 		return count;
@@ -1120,7 +1103,7 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 
 	fprintf(f, "mempool <%s>@%p\n", mp->name, mp);
 	fprintf(f, "  flags=%x\n", mp->flags);
-	fprintf(f, "  ring=<%s>@%p\n", mp->ring->name, mp->ring);
+	fprintf(f, "  pool=%p\n", mp->pool);
 	fprintf(f, "  phys_addr=0x%" PRIx64 "\n", mp->mz->phys_addr);
 	fprintf(f, "  nb_mem_chunks=%u\n", mp->nb_mem_chunks);
 	fprintf(f, "  size=%"PRIu32"\n", mp->size);
@@ -1141,7 +1124,7 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 	}
 
 	cache_count = rte_mempool_dump_cache(f, mp);
-	common_count = rte_ring_count(mp->ring);
+	common_count = rte_mempool_ext_get_count(mp);
 	if ((cache_count + common_count) > mp->size)
 		common_count = mp->size - cache_count;
 	fprintf(f, "  common_pool_count=%u\n", common_count);
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 60339bd..ed2c110 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -67,6 +67,7 @@
 #include <inttypes.h>
 #include <sys/queue.h>
 
+#include <rte_spinlock.h>
 #include <rte_log.h>
 #include <rte_debug.h>
 #include <rte_lcore.h>
@@ -203,7 +204,15 @@ struct rte_mempool_memhdr {
  */
 struct rte_mempool {
 	char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool. */
-	struct rte_ring *ring;           /**< Ring to store objects. */
+	void *pool;                      /**< Ring or ext-pool to store objects. */
+	/**
+	 * Index into the array of structs containing callback fn pointers.
+	 * We're using an index here rather than pointers to the callbacks
+	 * to facilitate any secondary processes that may want to use
+	 * this mempool. Any function pointers stored in the mempool
+	 * directly would not be valid for secondary processes.
+	 */
+	int32_t handler_idx;
 	const struct rte_memzone *mz;    /**< Memzone where pool is allocated */
 	int flags;                       /**< Flags of the mempool. */
 	int socket_id;                   /**< Socket id passed at mempool creation. */
@@ -325,6 +334,175 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp,
 #define __mempool_check_cookies(mp, obj_table_const, n, free) do {} while(0)
 #endif /* RTE_LIBRTE_MEMPOOL_DEBUG */
 
+#define RTE_MEMPOOL_HANDLER_NAMESIZE 32 /**< Max length of handler name. */
+
+/** Allocate the external pool. */
+typedef void *(*rte_mempool_alloc_t)(struct rte_mempool *mp);
+
+/** Free the external pool. */
+typedef void (*rte_mempool_free_t)(void *p);
+
+/** Put an object in the external pool. */
+typedef int (*rte_mempool_put_t)(void *p, void * const *obj_table, unsigned n);
+
+/** Get an object from the external pool. */
+typedef int (*rte_mempool_get_t)(void *p, void **obj_table, unsigned n);
+
+/** Return the number of available objects in the external pool. */
+typedef unsigned (*rte_mempool_get_count)(void *p);
+
+/** Structure defining a mempool handler. */
+struct rte_mempool_handler {
+	char name[RTE_MEMPOOL_HANDLER_NAMESIZE]; /**< Name of mempool handler */
+	rte_mempool_alloc_t alloc;       /**< Allocate the external pool. */
+	rte_mempool_free_t free;         /**< Free the external pool. */
+	rte_mempool_put_t put;           /**< Put an object. */
+	rte_mempool_get_t get;           /**< Get an object. */
+	rte_mempool_get_count get_count; /**< Get the number of available objs. */
+} __rte_cache_aligned;
+
+#define RTE_MEMPOOL_MAX_HANDLER_IDX 16  /**< Max number of registered handlers */
+
+/** Structure storing the table of registered handlers. */
+struct rte_mempool_handler_table {
+	rte_spinlock_t sl;     /**< Spinlock for add/delete. */
+	uint32_t num_handlers; /**< Number of handlers in the table. */
+	/** Storage for all possible handlers. */
+	struct rte_mempool_handler handler[RTE_MEMPOOL_MAX_HANDLER_IDX];
+};
+
+/** Array of registered handlers */
+extern struct rte_mempool_handler_table rte_mempool_handler_table;
+
+/**
+ * @internal Get the mempool handler from its index.
+ *
+ * @param handler_idx
+ *   The index of the handler in the handler table. It must be a valid
+ *   index: (0 <= idx < num_handlers).
+ * @return
+ *   The pointer to the handler in the table.
+ */
+static struct rte_mempool_handler *
+rte_mempool_handler_get(int handler_idx)
+{
+	return &rte_mempool_handler_table.handler[handler_idx];
+}
+
+/**
+ * @internal wrapper for external mempool manager alloc callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @return
+ *   The opaque pointer to the external pool.
+ */
+void *
+rte_mempool_ext_alloc(struct rte_mempool *mp);
+
+/**
+ * @internal wrapper for external mempool manager get callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param obj_table
+ *   Pointer to a table of void * pointers (objects).
+ * @param n
+ *   Number of objects to get.
+ * @return
+ *   - 0: Success; got n objects.
+ *   - <0: Error; code of handler get function.
+ */
+static inline int
+rte_mempool_ext_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
+{
+	struct rte_mempool_handler *handler;
+
+	handler = rte_mempool_handler_get(mp->handler_idx);
+	return handler->get(mp->pool, obj_table, n);
+}
+
+/**
+ * @internal wrapper for external mempool manager put callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param obj_table
+ *   Pointer to a table of void * pointers (objects).
+ * @param n
+ *   Number of objects to put.
+ * @return
+ *   - 0: Success; n objects supplied.
+ *   - <0: Error; code of handler put function.
+ */
+static inline int
+rte_mempool_ext_put_bulk(struct rte_mempool *mp, void * const *obj_table,
+		unsigned n)
+{
+	struct rte_mempool_handler *handler;
+
+	handler = rte_mempool_handler_get(mp->handler_idx);
+	return handler->put(mp->pool, obj_table, n);
+}
+
+/**
+ * @internal wrapper for external mempool manager get_count callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @return
+ *   The number of available objects in the external pool.
+ */
+unsigned
+rte_mempool_ext_get_count(const struct rte_mempool *mp);
+
+/**
+ * @internal wrapper for external mempool manager free callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ */
+void
+rte_mempool_ext_free(struct rte_mempool *mp);
+
+/**
+ * Set the handler of a mempool
+ *
+ * This can only be done on a mempool that is not populated, i.e. just after
+ * a call to rte_mempool_create_empty().
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param name
+ *   Name of the handler.
+ * @return
+ *   - 0: Success; the new handler is configured.
+ *   - <0: Error (errno)
+ */
+int
+rte_mempool_set_handler(struct rte_mempool *mp, const char *name);
+
+/**
+ * Register an external pool handler.
+ *
+ * @param h
+ *   Pointer to the external pool handler
+ * @return
+ *   - >=0: Success; return the index of the handler in the table.
+ *   - <0: Error (errno)
+ */
+int rte_mempool_handler_register(struct rte_mempool_handler *h);
+
+/**
+ * Macro to statically register an external pool handler.
+ */
+#define MEMPOOL_REGISTER_HANDLER(h)					\
+	void mp_hdlr_init_##h(void);					\
+	void __attribute__((constructor, used)) mp_hdlr_init_##h(void)	\
+	{								\
+		rte_mempool_handler_register(&h);			\
+	}
+
 /**
  * An object callback function for mempool.
  *
@@ -736,7 +914,7 @@ void rte_mempool_dump(FILE *f, struct rte_mempool *mp);
  */
 static inline void __attribute__((always_inline))
 __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
-		    unsigned n, int is_mp)
+		    unsigned n, __rte_unused int is_mp)
 {
 	struct rte_mempool_cache *cache;
 	uint32_t index;
@@ -774,7 +952,7 @@ __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
 	cache->len += n;
 
 	if (cache->len >= flushthresh) {
-		rte_ring_mp_enqueue_bulk(mp->ring, &cache->objs[cache_size],
+		rte_mempool_ext_put_bulk(mp, &cache->objs[cache_size],
 				cache->len - cache_size);
 		cache->len = cache_size;
 	}
@@ -782,26 +960,10 @@ __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
 	return;
 
 ring_enqueue:
-
 	/* push remaining objects in ring */
-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-	if (is_mp) {
-		if (rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n) < 0)
-			rte_panic("cannot put objects in mempool\n");
-	}
-	else {
-		if (rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n) < 0)
-			rte_panic("cannot put objects in mempool\n");
-	}
-#else
-	if (is_mp)
-		rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n);
-	else
-		rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n);
-#endif
+	rte_mempool_ext_put_bulk(mp, obj_table, n);
 }
 
-
 /**
  * Put several objects back in the mempool (multi-producers safe).
  *
@@ -922,7 +1084,7 @@ rte_mempool_put(struct rte_mempool *mp, void *obj)
  */
 static inline int __attribute__((always_inline))
 __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
-		   unsigned n, int is_mc)
+		   unsigned n, __rte_unused int is_mc)
 {
 	int ret;
 	struct rte_mempool_cache *cache;
@@ -945,7 +1107,8 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
 		uint32_t req = n + (cache_size - cache->len);
 
 		/* How many do we require i.e. number to fill the cache + the request */
-		ret = rte_ring_mc_dequeue_bulk(mp->ring, &cache->objs[cache->len], req);
+		ret = rte_mempool_ext_get_bulk(mp,
+			&cache->objs[cache->len], req);
 		if (unlikely(ret < 0)) {
 			/*
 			 * In the offchance that we are buffer constrained,
@@ -972,10 +1135,7 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
 ring_dequeue:
 
 	/* get remaining objects from ring */
-	if (is_mc)
-		ret = rte_ring_mc_dequeue_bulk(mp->ring, obj_table, n);
-	else
-		ret = rte_ring_sc_dequeue_bulk(mp->ring, obj_table, n);
+	ret = rte_mempool_ext_get_bulk(mp, obj_table, n);
 
 	if (ret < 0)
 		__MEMPOOL_STAT_ADD(mp, get_fail, n);
diff --git a/lib/librte_mempool/rte_mempool_default.c b/lib/librte_mempool/rte_mempool_default.c
new file mode 100644
index 0000000..a6ac65a
--- /dev/null
+++ b/lib/librte_mempool/rte_mempool_default.c
@@ -0,0 +1,147 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <string.h>
+
+#include <rte_errno.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+
+static int
+common_ring_mp_put(void *p, void * const *obj_table, unsigned n)
+{
+	return rte_ring_mp_enqueue_bulk((struct rte_ring *)p, obj_table, n);
+}
+
+static int
+common_ring_sp_put(void *p, void * const *obj_table, unsigned n)
+{
+	return rte_ring_sp_enqueue_bulk((struct rte_ring *)p, obj_table, n);
+}
+
+static int
+common_ring_mc_get(void *p, void **obj_table, unsigned n)
+{
+	return rte_ring_mc_dequeue_bulk((struct rte_ring *)p, obj_table, n);
+}
+
+static int
+common_ring_sc_get(void *p, void **obj_table, unsigned n)
+{
+	return rte_ring_sc_dequeue_bulk((struct rte_ring *)p, obj_table, n);
+}
+
+static unsigned
+common_ring_get_count(void *p)
+{
+	return rte_ring_count((struct rte_ring *)p);
+}
+
+
+static void *
+common_ring_alloc(struct rte_mempool *mp)
+{
+	int rg_flags = 0, ret;
+	char rg_name[RTE_RING_NAMESIZE];
+	struct rte_ring *r;
+
+	ret = snprintf(rg_name, sizeof(rg_name),
+		RTE_MEMPOOL_MZ_FORMAT, mp->name);
+	if (ret < 0 || ret >= (int)sizeof(rg_name)) {
+		rte_errno = ENAMETOOLONG;
+		return NULL;
+	}
+
+	/* ring flags */
+	if (mp->flags & MEMPOOL_F_SP_PUT)
+		rg_flags |= RING_F_SP_ENQ;
+	if (mp->flags & MEMPOOL_F_SC_GET)
+		rg_flags |= RING_F_SC_DEQ;
+
+	/* Allocate the ring that will be used to store objects.
+	 * Ring functions will return appropriate errors if we are
+	 * running as a secondary process etc., so no checks made
+	 * in this function for that condition. */
+	r = rte_ring_create(rg_name, rte_align32pow2(mp->size + 1),
+		mp->socket_id, rg_flags);
+
+	return r;
+}
+
+static void
+common_ring_free(void *p)
+{
+	rte_ring_free((struct rte_ring *)p);
+}
+
+static struct rte_mempool_handler handler_mp_mc = {
+	.name = "ring_mp_mc",
+	.alloc = common_ring_alloc,
+	.free = common_ring_free,
+	.put = common_ring_mp_put,
+	.get = common_ring_mc_get,
+	.get_count = common_ring_get_count,
+};
+
+static struct rte_mempool_handler handler_sp_sc = {
+	.name = "ring_sp_sc",
+	.alloc = common_ring_alloc,
+	.free = common_ring_free,
+	.put = common_ring_sp_put,
+	.get = common_ring_sc_get,
+	.get_count = common_ring_get_count,
+};
+
+static struct rte_mempool_handler handler_mp_sc = {
+	.name = "ring_mp_sc",
+	.alloc = common_ring_alloc,
+	.free = common_ring_free,
+	.put = common_ring_mp_put,
+	.get = common_ring_sc_get,
+	.get_count = common_ring_get_count,
+};
+
+static struct rte_mempool_handler handler_sp_mc = {
+	.name = "ring_sp_mc",
+	.alloc = common_ring_alloc,
+	.free = common_ring_free,
+	.put = common_ring_sp_put,
+	.get = common_ring_mc_get,
+	.get_count = common_ring_get_count,
+};
+
+MEMPOOL_REGISTER_HANDLER(handler_mp_mc);
+MEMPOOL_REGISTER_HANDLER(handler_sp_sc);
+MEMPOOL_REGISTER_HANDLER(handler_mp_sc);
+MEMPOOL_REGISTER_HANDLER(handler_sp_mc);
diff --git a/lib/librte_mempool/rte_mempool_handler.c b/lib/librte_mempool/rte_mempool_handler.c
new file mode 100644
index 0000000..78611f8
--- /dev/null
+++ b/lib/librte_mempool/rte_mempool_handler.c
@@ -0,0 +1,139 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2016 6WIND S.A.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <string.h>
+
+#include <rte_mempool.h>
+
+/* indirect jump table to support external memory pools */
+struct rte_mempool_handler_table rte_mempool_handler_table = {
+	.sl = RTE_SPINLOCK_INITIALIZER,
+	.num_handlers = 0
+};
+
+/* add a new handler in rte_mempool_handler_table, return its index */
+int
+rte_mempool_handler_register(struct rte_mempool_handler *h)
+{
+	struct rte_mempool_handler *handler;
+	int16_t handler_idx;
+
+	rte_spinlock_lock(&rte_mempool_handler_table.sl);
+
+	if (rte_mempool_handler_table.num_handlers >= RTE_MEMPOOL_MAX_HANDLER_IDX) {
+		rte_spinlock_unlock(&rte_mempool_handler_table.sl);
+		RTE_LOG(ERR, MEMPOOL,
+			"Maximum number of mempool handlers exceeded\n");
+		return -ENOSPC;
+	}
+
+	if (h->put == NULL || h->get == NULL || h->get_count == NULL) {
+		rte_spinlock_unlock(&rte_mempool_handler_table.sl);
+		RTE_LOG(ERR, MEMPOOL,
+			"Missing callback while registering mempool handler\n");
+		return -EINVAL;
+	}
+
+	handler_idx = rte_mempool_handler_table.num_handlers++;
+	handler = &rte_mempool_handler_table.handler[handler_idx];
+	snprintf(handler->name, sizeof(handler->name), "%s", h->name);
+	handler->alloc = h->alloc;
+	handler->free = h->free;
+	handler->put = h->put;
+	handler->get = h->get;
+	handler->get_count = h->get_count;
+
+	rte_spinlock_unlock(&rte_mempool_handler_table.sl);
+
+	return handler_idx;
+}
+
+/* wrapper to allocate an external pool handler */
+void *
+rte_mempool_ext_alloc(struct rte_mempool *mp)
+{
+	struct rte_mempool_handler *handler;
+
+	handler = rte_mempool_handler_get(mp->handler_idx);
+	if (handler->alloc == NULL)
+		return NULL;
+	return handler->alloc(mp);
+}
+
+/* wrapper to free an external pool handler */
+void
+rte_mempool_ext_free(struct rte_mempool *mp)
+{
+	struct rte_mempool_handler *handler;
+
+	handler = rte_mempool_handler_get(mp->handler_idx);
+	if (handler->free == NULL)
+		return;
+	return handler->free(mp);
+}
+
+/* wrapper to get available objects in an external pool handler */
+unsigned
+rte_mempool_ext_get_count(const struct rte_mempool *mp)
+{
+	struct rte_mempool_handler *handler;
+
+	handler = rte_mempool_handler_get(mp->handler_idx);
+	return handler->get_count(mp->pool);
+}
+
+/* set the handler of a mempool */
+int
+rte_mempool_set_handler(struct rte_mempool *mp, const char *name)
+{
+	struct rte_mempool_handler *handler = NULL;
+	unsigned i;
+
+	/* too late, the mempool is already populated */
+	if (mp->flags & MEMPOOL_F_RING_CREATED)
+		return -EEXIST;
+
+	for (i = 0; i < rte_mempool_handler_table.num_handlers; i++) {
+		if (!strcmp(name, rte_mempool_handler_table.handler[i].name)) {
+			handler = &rte_mempool_handler_table.handler[i];
+			break;
+		}
+	}
+
+	if (handler == NULL)
+		return -EINVAL;
+
+	mp->handler_idx = i;
+	return 0;
+}
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index f63461b..a0e9aed 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -19,6 +19,8 @@ DPDK_2.0 {
 DPDK_16.7 {
 	global:
 
+	rte_mempool_handler_table;
+
 	rte_mempool_check_cookies;
 	rte_mempool_obj_iter;
 	rte_mempool_mem_iter;
@@ -29,6 +31,8 @@ DPDK_16.7 {
 	rte_mempool_populate_default;
 	rte_mempool_populate_anon;
 	rte_mempool_free;
+	rte_mempool_set_handler;
+	rte_mempool_handler_register;
 
 	local: *;
 } DPDK_2.0;
-- 
2.5.5

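As a side note for readers trying to picture the handler-table mechanism the patch describes (a name-to-index lookup, with the index stored in the mempool so that secondary processes never dereference stale function pointers), here is a minimal, self-contained sketch of that pattern. All names below are illustrative stand-ins, not the DPDK API; the real code uses `rte_mempool_handler_register()` and `rte_mempool_set_handler()`, plus a spinlock around registration:

```c
#include <string.h>

#define MAX_HANDLERS 16 /* mirrors RTE_MEMPOOL_MAX_HANDLER_IDX */

/* Cut-down stand-in for struct rte_mempool_handler. */
struct pool_handler {
	char name[32];
	int (*put)(void *p, void * const *objs, unsigned n);
	int (*get)(void *p, void **objs, unsigned n);
};

static struct pool_handler handler_table[MAX_HANDLERS];
static unsigned num_handlers;

/* Register a handler by copying it into the static table; return its
 * index, or -1 if the table is full. Callers store this index, not the
 * function pointers, which is what keeps the mempool usable from
 * secondary processes. */
static int
handler_register(const struct pool_handler *h)
{
	if (num_handlers >= MAX_HANDLERS)
		return -1;
	handler_table[num_handlers] = *h;
	return (int)num_handlers++;
}

/* Resolve a handler name to its table index, mirroring what
 * rte_mempool_set_handler() does; -1 if the name is unknown. */
static int
handler_lookup(const char *name)
{
	unsigned i;

	for (i = 0; i < num_handlers; i++)
		if (strcmp(name, handler_table[i].name) == 0)
			return (int)i;
	return -1;
}
```

In the patch itself the same registration is triggered automatically at program start by the `MEMPOOL_REGISTER_HANDLER` constructor macro, so each built-in handler file only has to declare its `struct rte_mempool_handler` statically.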
^ permalink raw reply related	[flat|nested] 238+ messages in thread

* [PATCH v5 2/3] app/test: test external mempool handler
  2016-05-19 13:44       ` mempool: external mempool manager David Hunt
  2016-05-19 13:44         ` [PATCH v5 1/3] mempool: support external handler David Hunt
@ 2016-05-19 13:45         ` David Hunt
  2016-05-23 12:45           ` [dpdk-dev, v5, " Jan Viktorin
  2016-05-19 13:45         ` [PATCH v5 3/3] mbuf: get default mempool handler from configuration David Hunt
  2016-06-01 16:19         ` [PATCH v6 0/5] mempool: add external mempool manager David Hunt
  3 siblings, 1 reply; 238+ messages in thread
From: David Hunt @ 2016-05-19 13:45 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, yuanhan.liu, pmatilai, David Hunt

Use a minimal custom mempool external handler and check that it also
passes basic mempool autotests.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
---
 app/test/test_mempool.c | 113 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 113 insertions(+)

diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c
index 9f02758..f55d126 100644
--- a/app/test/test_mempool.c
+++ b/app/test/test_mempool.c
@@ -85,6 +85,96 @@
 static rte_atomic32_t synchro;
 
 /*
+ * Simple example of custom mempool structure. Holds pointers to all the
+ * elements which are simply malloc'd in this example.
+ */
+struct custom_mempool {
+	rte_spinlock_t lock;
+	unsigned count;
+	unsigned size;
+	void *elts[];
+};
+
+/*
+ * Allocate the custom mempool structure, with room for pointers to all the
+ * elements. The element pointers themselves are added later via put().
+ */
+static void *
+custom_mempool_alloc(struct rte_mempool *mp)
+{
+	struct custom_mempool *cm;
+
+	cm = rte_zmalloc("custom_mempool",
+		sizeof(struct custom_mempool) + mp->size * sizeof(void *), 0);
+	if (cm == NULL)
+		return NULL;
+
+	rte_spinlock_init(&cm->lock);
+	cm->count = 0;
+	cm->size = mp->size;
+	return cm;
+}
+
+static void
+custom_mempool_free(void *p)
+{
+	rte_free(p);
+}
+
+static int
+custom_mempool_put(void *p, void * const *obj_table, unsigned n)
+{
+	struct custom_mempool *cm = (struct custom_mempool *)p;
+	int ret = 0;
+
+	rte_spinlock_lock(&cm->lock);
+	if (cm->count + n > cm->size) {
+		ret = -ENOBUFS;
+	} else {
+		memcpy(&cm->elts[cm->count], obj_table, sizeof(void *) * n);
+		cm->count += n;
+	}
+	rte_spinlock_unlock(&cm->lock);
+	return ret;
+}
+
+
+static int
+custom_mempool_get(void *p, void **obj_table, unsigned n)
+{
+	struct custom_mempool *cm = (struct custom_mempool *)p;
+	int ret = 0;
+
+	rte_spinlock_lock(&cm->lock);
+	if (n > cm->count) {
+		ret = -ENOENT;
+	} else {
+		cm->count -= n;
+		memcpy(obj_table, &cm->elts[cm->count], sizeof(void *) * n);
+	}
+	rte_spinlock_unlock(&cm->lock);
+	return ret;
+}
+
+static unsigned
+custom_mempool_get_count(void *p)
+{
+	struct custom_mempool *cm = (struct custom_mempool *)p;
+	return cm->count;
+}
+
+static struct rte_mempool_handler mempool_handler_custom = {
+	.name = "custom_handler",
+	.alloc = custom_mempool_alloc,
+	.free = custom_mempool_free,
+	.put = custom_mempool_put,
+	.get = custom_mempool_get,
+	.get_count = custom_mempool_get_count,
+};
+
+MEMPOOL_REGISTER_HANDLER(mempool_handler_custom);
+
+/*
  * save the object number in the first 4 bytes of object data. All
  * other bytes are set to 0.
  */
@@ -479,6 +569,7 @@ test_mempool(void)
 {
 	struct rte_mempool *mp_cache = NULL;
 	struct rte_mempool *mp_nocache = NULL;
+	struct rte_mempool *mp_ext = NULL;
 
 	rte_atomic32_init(&synchro);
 
@@ -507,6 +598,27 @@ test_mempool(void)
 		goto err;
 	}
 
+	/* create a mempool with an external handler */
+	mp_ext = rte_mempool_create_empty("test_ext",
+		MEMPOOL_SIZE,
+		MEMPOOL_ELT_SIZE,
+		RTE_MEMPOOL_CACHE_MAX_SIZE, 0,
+		SOCKET_ID_ANY, 0);
+
+	if (mp_ext == NULL) {
+		printf("cannot allocate mp_ext mempool\n");
+		goto err;
+	}
+	if (rte_mempool_set_handler(mp_ext, "custom_handler") < 0) {
+		printf("cannot set custom handler\n");
+		goto err;
+	}
+	if (rte_mempool_populate_default(mp_ext) < 0) {
+		printf("cannot populate mp_ext mempool\n");
+		goto err;
+	}
+	rte_mempool_obj_iter(mp_ext, my_obj_init, NULL);
+
 	/* retrieve the mempool from its name */
 	if (rte_mempool_lookup("test_nocache") != mp_nocache) {
 		printf("Cannot lookup mempool from its name\n");
@@ -547,6 +659,7 @@ test_mempool(void)
 err:
 	rte_mempool_free(mp_nocache);
 	rte_mempool_free(mp_cache);
+	rte_mempool_free(mp_ext);
 	return -1;
 }
 
-- 
2.5.5

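For anyone prototyping their own handler against this API, the put/get contract exercised by the test patch above boils down to a bounded LIFO array with capacity checks. A single-threaded sketch of that contract follows (illustrative names; the real `custom_mempool` in the test also takes a spinlock around each operation, omitted here for brevity):

```c
#include <stdlib.h>
#include <string.h>

/* Stand-in for the test's struct custom_mempool, minus the lock. */
struct custom_pool {
	unsigned count; /* objects currently stored */
	unsigned size;  /* capacity */
	void *elts[];   /* flexible array of object pointers */
};

/* Allocate the pool bookkeeping structure; elements arrive later. */
static struct custom_pool *
pool_alloc(unsigned size)
{
	struct custom_pool *cp;

	cp = calloc(1, sizeof(*cp) + size * sizeof(void *));
	if (cp != NULL)
		cp->size = size;
	return cp;
}

/* Store n object pointers; fail (like -ENOBUFS) on overflow. */
static int
pool_put(struct custom_pool *cp, void * const *objs, unsigned n)
{
	if (cp->count + n > cp->size)
		return -1;
	memcpy(&cp->elts[cp->count], objs, n * sizeof(void *));
	cp->count += n;
	return 0;
}

/* Retrieve n object pointers; fail (like -ENOENT) on underflow. */
static int
pool_get(struct custom_pool *cp, void **objs, unsigned n)
{
	if (n > cp->count)
		return -1;
	cp->count -= n;
	memcpy(objs, &cp->elts[cp->count], n * sizeof(void *));
	return 0;
}
```

Note the all-or-nothing semantics: a bulk put or get either transfers all n objects or none, matching what the caching layer in `__mempool_put_bulk`/`__mempool_get_bulk` expects from a handler.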
^ permalink raw reply related	[flat|nested] 238+ messages in thread

* [PATCH v5 3/3] mbuf: get default mempool handler from configuration
  2016-05-19 13:44       ` mempool: external mempool manager David Hunt
  2016-05-19 13:44         ` [PATCH v5 1/3] mempool: support external handler David Hunt
  2016-05-19 13:45         ` [PATCH v5 2/3] app/test: test external mempool handler David Hunt
@ 2016-05-19 13:45         ` David Hunt
  2016-05-23 12:40           ` [dpdk-dev, v5, " Jan Viktorin
  2016-06-01 16:19         ` [PATCH v6 0/5] mempool: add external mempool manager David Hunt
  3 siblings, 1 reply; 238+ messages in thread
From: David Hunt @ 2016-05-19 13:45 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, yuanhan.liu, pmatilai, David Hunt

By default, the mempool handler used for mbuf allocations is a multi
producer and multi consumer ring. We could imagine a target (maybe some
network processors?) that provides a hardware-assisted pool
mechanism. In this case, the default configuration for this architecture
would contain a different value for RTE_MBUF_DEFAULT_MEMPOOL_HANDLER.

Signed-off-by: David Hunt <david.hunt@intel.com>
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
 config/common_base         |  1 +
 lib/librte_mbuf/rte_mbuf.c | 21 +++++++++++++++++----
 2 files changed, 18 insertions(+), 4 deletions(-)

diff --git a/config/common_base b/config/common_base
index 3535c6e..5cf5e52 100644
--- a/config/common_base
+++ b/config/common_base
@@ -394,6 +394,7 @@ CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG=n
 #
 CONFIG_RTE_LIBRTE_MBUF=y
 CONFIG_RTE_LIBRTE_MBUF_DEBUG=n
+CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_HANDLER="ring_mp_mc"
 CONFIG_RTE_MBUF_REFCNT_ATOMIC=y
 CONFIG_RTE_PKTMBUF_HEADROOM=128
 
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index eec1456..5dcdc05 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -153,6 +153,7 @@ rte_pktmbuf_pool_create(const char *name, unsigned n,
 	unsigned cache_size, uint16_t priv_size, uint16_t data_room_size,
 	int socket_id)
 {
+	struct rte_mempool *mp;
 	struct rte_pktmbuf_pool_private mbp_priv;
 	unsigned elt_size;
 
@@ -167,10 +168,22 @@ rte_pktmbuf_pool_create(const char *name, unsigned n,
 	mbp_priv.mbuf_data_room_size = data_room_size;
 	mbp_priv.mbuf_priv_size = priv_size;
 
-	return rte_mempool_create(name, n, elt_size,
-		cache_size, sizeof(struct rte_pktmbuf_pool_private),
-		rte_pktmbuf_pool_init, &mbp_priv, rte_pktmbuf_init, NULL,
-		socket_id, 0);
+	mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
+		 sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
+	if (mp == NULL)
+		return NULL;
+
+	rte_mempool_set_handler(mp, RTE_MBUF_DEFAULT_MEMPOOL_HANDLER);
+	rte_pktmbuf_pool_init(mp, &mbp_priv);
+
+	if (rte_mempool_populate_default(mp) < 0) {
+		rte_mempool_free(mp);
+		return NULL;
+	}
+
+	rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);
+
+	return mp;
 }
 
 /* do some sanity checks on a mbuf: panic if it fails */
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 238+ messages in thread

* Re: [dpdk-dev,v5,1/3] mempool: support external handler
  2016-05-19 13:44         ` [PATCH v5 1/3] mempool: support external handler David Hunt
@ 2016-05-23 12:35           ` Jan Viktorin
  2016-05-24 14:04             ` Hunt, David
  2016-05-31  9:09             ` Hunt, David
  2016-05-24 15:35           ` [PATCH v5 1/3] " Jerin Jacob
  1 sibling, 2 replies; 238+ messages in thread
From: Jan Viktorin @ 2016-05-23 12:35 UTC (permalink / raw)
  To: David Hunt; +Cc: dev, olivier.matz, yuanhan.liu, pmatilai

Hello David,

please, see my comments inline.

I didn't see the previous versions of the mempool (well, only very roughly) so I am
probably missing some points... My point of view is as a user of the handler API.
I need to understand the API to implement a custom handler for my purposes.

On Thu, 19 May 2016 14:44:59 +0100
David Hunt <david.hunt@intel.com> wrote:

> Until now, the objects stored in mempool mempool were internally stored a

s/mempool mempool/mempool/

stored _in_ a ring?

> ring. This patch introduce the possibility to register external handlers
> replacing the ring.
> 
> The default behavior remains unchanged, but calling the new function
> rte_mempool_set_handler() right after rte_mempool_create_empty() allows to
> change the handler that will be used when populating the mempool.
> 
> v5 changes: rebasing on top of 35 patch set mempool work.
> 
> Signed-off-by: David Hunt <david.hunt@intel.com>
> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> 
> ---
> app/test/test_mempool_perf.c               |   1 -
>  lib/librte_mempool/Makefile                |   2 +
>  lib/librte_mempool/rte_mempool.c           |  73 ++++------
>  lib/librte_mempool/rte_mempool.h           | 212 +++++++++++++++++++++++++----
>  lib/librte_mempool/rte_mempool_default.c   | 147 ++++++++++++++++++++
>  lib/librte_mempool/rte_mempool_handler.c   | 139 +++++++++++++++++++
>  lib/librte_mempool/rte_mempool_version.map |   4 +
>  7 files changed, 506 insertions(+), 72 deletions(-)
>  create mode 100644 lib/librte_mempool/rte_mempool_default.c
>  create mode 100644 lib/librte_mempool/rte_mempool_handler.c
> 
> diff --git a/app/test/test_mempool_perf.c b/app/test/test_mempool_perf.c
> index cdc02a0..091c1df 100644
> --- a/app/test/test_mempool_perf.c
> +++ b/app/test/test_mempool_perf.c
> @@ -161,7 +161,6 @@ per_lcore_mempool_test(__attribute__((unused)) void *arg)
>  							   n_get_bulk);
>  				if (unlikely(ret < 0)) {
>  					rte_mempool_dump(stdout, mp);
> -					rte_ring_dump(stdout, mp->ring);
>  					/* in this case, objects are lost... */
>  					return -1;
>  				}

I think, this should be in a separate patch explaining the reason to remove it.

> diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
> index 43423e0..f19366e 100644
> --- a/lib/librte_mempool/Makefile
> +++ b/lib/librte_mempool/Makefile
> @@ -42,6 +42,8 @@ LIBABIVER := 2
>  
>  # all source are stored in SRCS-y
>  SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool.c
> +SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_handler.c
> +SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_default.c
>  # install includes
>  SYMLINK-$(CONFIG_RTE_LIBRTE_MEMPOOL)-include := rte_mempool.h
>  
> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> index 1ab6701..6ec2b3f 100644
> --- a/lib/librte_mempool/rte_mempool.c
> +++ b/lib/librte_mempool/rte_mempool.c
> @@ -148,7 +148,7 @@ mempool_add_elem(struct rte_mempool *mp, void *obj, phys_addr_t physaddr)
>  #endif
>  
>  	/* enqueue in ring */
> -	rte_ring_sp_enqueue(mp->ring, obj);
> +	rte_mempool_ext_put_bulk(mp, &obj, 1);

I suppose this is OK, however, replacing "enqueue" by "put" (semantically) sounds to me
like a bug. Enqueue is put into a queue. Put is to drop a reference.

>  }
>  
>  /* call obj_cb() for each mempool element */
> @@ -300,40 +300,6 @@ rte_mempool_xmem_usage(__rte_unused void *vaddr, uint32_t elt_num,
>  	return (size_t)paddr_idx << pg_shift;
>  }
>  
> -/* create the internal ring */
> -static int
> -rte_mempool_ring_create(struct rte_mempool *mp)
> -{
> -	int rg_flags = 0, ret;
> -	char rg_name[RTE_RING_NAMESIZE];
> -	struct rte_ring *r;
> -
> -	ret = snprintf(rg_name, sizeof(rg_name),
> -		RTE_MEMPOOL_MZ_FORMAT, mp->name);
> -	if (ret < 0 || ret >= (int)sizeof(rg_name))
> -		return -ENAMETOOLONG;
> -
> -	/* ring flags */
> -	if (mp->flags & MEMPOOL_F_SP_PUT)
> -		rg_flags |= RING_F_SP_ENQ;
> -	if (mp->flags & MEMPOOL_F_SC_GET)
> -		rg_flags |= RING_F_SC_DEQ;
> -
> -	/* Allocate the ring that will be used to store objects.
> -	 * Ring functions will return appropriate errors if we are
> -	 * running as a secondary process etc., so no checks made
> -	 * in this function for that condition.
> -	 */
> -	r = rte_ring_create(rg_name, rte_align32pow2(mp->size + 1),
> -		mp->socket_id, rg_flags);
> -	if (r == NULL)
> -		return -rte_errno;
> -
> -	mp->ring = r;
> -	mp->flags |= MEMPOOL_F_RING_CREATED;
> -	return 0;
> -}

This is a big change. I suggest (if possible) to make a separate patch with
something like "replace rte_mempool_ring_create by ...". Where is this code
placed now?

> -
>  /* free a memchunk allocated with rte_memzone_reserve() */
>  static void
>  rte_mempool_memchunk_mz_free(__rte_unused struct rte_mempool_memhdr *memhdr,
> @@ -351,7 +317,7 @@ rte_mempool_free_memchunks(struct rte_mempool *mp)
>  	void *elt;
>  
>  	while (!STAILQ_EMPTY(&mp->elt_list)) {
> -		rte_ring_sc_dequeue(mp->ring, &elt);
> +		rte_mempool_ext_get_bulk(mp, &elt, 1);

Similar as for put_bulk... Replacing "dequeue" by "get" (semantically) sounds to me
like a bug. Dequeue is drop from a queue. Get is to obtain a reference.

>  		(void)elt;
>  		STAILQ_REMOVE_HEAD(&mp->elt_list, next);
>  		mp->populated_size--;
> @@ -380,15 +346,18 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
>  	unsigned i = 0;
>  	size_t off;
>  	struct rte_mempool_memhdr *memhdr;
> -	int ret;
>  
>  	/* create the internal ring if not already done */
>  	if ((mp->flags & MEMPOOL_F_RING_CREATED) == 0) {
> -		ret = rte_mempool_ring_create(mp);
> -		if (ret < 0)
> -			return ret;
> +		rte_errno = 0;
> +		mp->pool = rte_mempool_ext_alloc(mp);
> +		if (mp->pool == NULL) {
> +			if (rte_errno == 0)
> +				return -EINVAL;
> +			else
> +				return -rte_errno;
> +		}
>  	}
> -

Is this a whitespace change?

>  	/* mempool is already populated */
>  	if (mp->populated_size >= mp->size)
>  		return -ENOSPC;
> @@ -700,7 +669,7 @@ rte_mempool_free(struct rte_mempool *mp)
>  	rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
>  
>  	rte_mempool_free_memchunks(mp);
> -	rte_ring_free(mp->ring);
> +	rte_mempool_ext_free(mp);
>  	rte_memzone_free(mp->mz);
>  }
>  
> @@ -812,6 +781,20 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
>  		RTE_PTR_ADD(mp, MEMPOOL_HEADER_SIZE(mp, 0));
>  
>  	te->data = mp;
> +
> +	/*
> +	 * Since we have 4 combinations of the SP/SC/MP/MC examine the flags to
> +	 * set the correct index into the handler table.
> +	 */
> +	if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
> +		rte_mempool_set_handler(mp, "ring_sp_sc");
> +	else if (flags & MEMPOOL_F_SP_PUT)
> +		rte_mempool_set_handler(mp, "ring_sp_mc");
> +	else if (flags & MEMPOOL_F_SC_GET)
> +		rte_mempool_set_handler(mp, "ring_mp_sc");
> +	else
> +		rte_mempool_set_handler(mp, "ring_mp_mc");
> +

Do I understand correctly that this code preserves the behaviour of the previous
API? Otherwise it looks strange.

>  	rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
>  	TAILQ_INSERT_TAIL(mempool_list, te, next);
>  	rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
> @@ -927,7 +910,7 @@ rte_mempool_count(const struct rte_mempool *mp)
>  	unsigned count;
>  	unsigned lcore_id;
>  
> -	count = rte_ring_count(mp->ring);
> +	count = rte_mempool_ext_get_count(mp);
>  
>  	if (mp->cache_size == 0)
>  		return count;
> @@ -1120,7 +1103,7 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
>  
>  	fprintf(f, "mempool <%s>@%p\n", mp->name, mp);
>  	fprintf(f, "  flags=%x\n", mp->flags);
> -	fprintf(f, "  ring=<%s>@%p\n", mp->ring->name, mp->ring);
> +	fprintf(f, "  pool=%p\n", mp->pool);
>  	fprintf(f, "  phys_addr=0x%" PRIx64 "\n", mp->mz->phys_addr);
>  	fprintf(f, "  nb_mem_chunks=%u\n", mp->nb_mem_chunks);
>  	fprintf(f, "  size=%"PRIu32"\n", mp->size);
> @@ -1141,7 +1124,7 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
>  	}
>  
>  	cache_count = rte_mempool_dump_cache(f, mp);
> -	common_count = rte_ring_count(mp->ring);
> +	common_count = rte_mempool_ext_get_count(mp);
>  	if ((cache_count + common_count) > mp->size)
>  		common_count = mp->size - cache_count;
>  	fprintf(f, "  common_pool_count=%u\n", common_count);
> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> index 60339bd..ed2c110 100644
> --- a/lib/librte_mempool/rte_mempool.h
> +++ b/lib/librte_mempool/rte_mempool.h
> @@ -67,6 +67,7 @@
>  #include <inttypes.h>
>  #include <sys/queue.h>
>  
> +#include <rte_spinlock.h>
>  #include <rte_log.h>
>  #include <rte_debug.h>
>  #include <rte_lcore.h>
> @@ -203,7 +204,15 @@ struct rte_mempool_memhdr {
>   */
>  struct rte_mempool {
>  	char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool. */
> -	struct rte_ring *ring;           /**< Ring to store objects. */
> +	void *pool;                      /**< Ring or ext-pool to store objects. */
> +	/**
> +	 * Index into the array of structs containing callback fn pointers.
> +	 * We're using an index here rather than pointers to the callbacks
> +	 * to facilitate any secondary processes that may want to use
> +	 * this mempool. Any function pointers stored in the mempool
> +	 * directly would not be valid for secondary processes.
> +	 */

I think this comment should go to the rte_mempool_handler_table definition,
leaving just a short note about it here.

> +	int32_t handler_idx;
>  	const struct rte_memzone *mz;    /**< Memzone where pool is allocated */
>  	int flags;                       /**< Flags of the mempool. */
>  	int socket_id;                   /**< Socket id passed at mempool creation. */
> @@ -325,6 +334,175 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp,
>  #define __mempool_check_cookies(mp, obj_table_const, n, free) do {} while(0)
>  #endif /* RTE_LIBRTE_MEMPOOL_DEBUG */
>  
> +#define RTE_MEMPOOL_HANDLER_NAMESIZE 32 /**< Max length of handler name. */
> +
> +/** Allocate the external pool. */

What is the purpose of this callback?
What exactly does it allocate?
Some rte_mempool internals?
Or the memory?
What does it return?

> +typedef void *(*rte_mempool_alloc_t)(struct rte_mempool *mp);
> +
> +/** Free the external pool. */

Why does this *_free callback not accept the rte_mempool param?

> +typedef void (*rte_mempool_free_t)(void *p);
> +
> +/** Put an object in the external pool. */

What is the *p pointer?
What is the obj_table?
Why is it void *?
Why is it const?

> +typedef int (*rte_mempool_put_t)(void *p, void * const *obj_table, unsigned n);

Probably, "unsigned int n" is better.

> +
> +/** Get an object from the external pool. */
> +typedef int (*rte_mempool_get_t)(void *p, void **obj_table, unsigned n);

Probably, "unsigned int n" is better.

> +
> +/** Return the number of available objects in the external pool. */

What is the purpose of the *_get_count callback? I guess it can introduce
race conditions...

> +typedef unsigned (*rte_mempool_get_count)(void *p);

unsigned int

> +
> +/** Structure defining a mempool handler. */

Later in the text, I suggest renaming rte_mempool_handler to rte_mempool_ops.
I believe that it explains the purpose of this struct better. It would also improve
consistency in function names (the *_ext_* mark is very strange and inconsistent).

> +struct rte_mempool_handler {
> +	char name[RTE_MEMPOOL_HANDLER_NAMESIZE]; /**< Name of mempool handler */
> +	rte_mempool_alloc_t alloc;       /**< Allocate the external pool. */
> +	rte_mempool_free_t free;         /**< Free the external pool. */
> +	rte_mempool_put_t put;           /**< Put an object. */
> +	rte_mempool_get_t get;           /**< Get an object. */
> +	rte_mempool_get_count get_count; /**< Get the number of available objs. */
> +} __rte_cache_aligned;
> +
> +#define RTE_MEMPOOL_MAX_HANDLER_IDX 16  /**< Max number of registered handlers */
> +
> +/** Structure storing the table of registered handlers. */
> +struct rte_mempool_handler_table {
> +	rte_spinlock_t sl;     /**< Spinlock for add/delete. */
> +	uint32_t num_handlers; /**< Number of handlers in the table. */
> +	/** Storage for all possible handlers. */
> +	struct rte_mempool_handler handler[RTE_MEMPOOL_MAX_HANDLER_IDX];
> +};

The handlers are implemented as an array because of multi-process access.
Is that correct? I'd expect a note about it here.

> +
> +/** Array of registered handlers */
> +extern struct rte_mempool_handler_table rte_mempool_handler_table;
> +
> +/**
> + * @internal Get the mempool handler from its index.
> + *
> + * @param handler_idx
> + *   The index of the handler in the handler table. It must be a valid
> + *   index: (0 <= idx < num_handlers).
> + * @return
> + *   The pointer to the handler in the table.
> + */
> +static struct rte_mempool_handler *
> +rte_mempool_handler_get(int handler_idx)
> +{
> +	return &rte_mempool_handler_table.handler[handler_idx];

Is it always safe? Can we believe the handler_idx is inside the boundaries?
At least some RTE_VERIFY would be nice here...
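A bounds check along these lines would do it. This is only a sketch: the struct
below is a stand-in for the real rte_mempool_handler_table, and a plain assert
replaces RTE_VERIFY so the example is self-contained.

```c
#include <assert.h>
#include <stdint.h>

#define MAX_HANDLER_IDX 16

struct handler { char name[32]; };

struct handler_table {
	uint32_t num_handlers;
	struct handler handler[MAX_HANDLER_IDX];
};

/* one handler registered, mimicking the table after a constructor ran */
static struct handler_table handler_table = { .num_handlers = 1 };

static struct handler *
handler_get(int handler_idx)
{
	/* refuse out-of-range indices instead of silently reading
	 * past the registered entries */
	assert(handler_idx >= 0 &&
	       (uint32_t)handler_idx < handler_table.num_handlers);
	return &handler_table.handler[handler_idx];
}
```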

> +}
> +
> +/**
> + * @internal wrapper for external mempool manager alloc callback.
> + *
> + * @param mp
> + *   Pointer to the memory pool.
> + * @return
> + *   The opaque pointer to the external pool.
> + */
> +void *
> +rte_mempool_ext_alloc(struct rte_mempool *mp);
> +
> +/**
> + * @internal wrapper for external mempool manager get callback.
> + *
> + * @param mp
> + *   Pointer to the memory pool.
> + * @param obj_table
> + *   Pointer to a table of void * pointers (objects).
> + * @param n
> + *   Number of objects to get.
> + * @return
> + *   - 0: Success; got n objects.
> + *   - <0: Error; code of handler get function.

Should this doc be more specific about the possible failures?

> + */
> +static inline int
> +rte_mempool_ext_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
> +{
> +	struct rte_mempool_handler *handler;
> +
> +	handler = rte_mempool_handler_get(mp->handler_idx);
> +	return handler->get(mp->pool, obj_table, n);
> +}
> +
> +/**
> + * @internal wrapper for external mempool manager put callback.
> + *
> + * @param mp
> + *   Pointer to the memory pool.
> + * @param obj_table
> + *   Pointer to a table of void * pointers (objects).
> + * @param n
> + *   Number of objects to put.
> + * @return
> + *   - 0: Success; n objects supplied.
> + *   - <0: Error; code of handler put function.

Should this doc be more specific about the possible failures?

> + */
> +static inline int
> +rte_mempool_ext_put_bulk(struct rte_mempool *mp, void * const *obj_table,
> +		unsigned n)
> +{
> +	struct rte_mempool_handler *handler;
> +
> +	handler = rte_mempool_handler_get(mp->handler_idx);
> +	return handler->put(mp->pool, obj_table, n);
> +}
> +
> +/**
> + * @internal wrapper for external mempool manager get_count callback.
> + *
> + * @param mp
> + *   Pointer to the memory pool.
> + * @return
> + *   The number of available objects in the external pool.
> + */
> +unsigned

unsigned int

> +rte_mempool_ext_get_count(const struct rte_mempool *mp);
> +
> +/**
> + * @internal wrapper for external mempool manager free callback.
> + *
> + * @param mp
> + *   Pointer to the memory pool.
> + */
> +void
> +rte_mempool_ext_free(struct rte_mempool *mp);
> +
> +/**
> + * Set the handler of a mempool
> + *
> + * This can only be done on a mempool that is not populated, i.e. just after
> + * a call to rte_mempool_create_empty().
> + *
> + * @param mp
> + *   Pointer to the memory pool.
> + * @param name
> + *   Name of the handler.
> + * @return
> + *   - 0: Sucess; the new handler is configured.
> + *   - <0: Error (errno)

Should this doc be more specific about the possible failures?

The body of rte_mempool_set_handler does not set errno at all.
It returns e.g. -EEXIST.

> + */
> +int
> +rte_mempool_set_handler(struct rte_mempool *mp, const char *name);
> +
> +/**
> + * Register an external pool handler.
> + *
> + * @param h
> + *   Pointer to the external pool handler
> + * @return
> + *   - >=0: Sucess; return the index of the handler in the table.
> + *   - <0: Error (errno)

Should this doc be more specific about the possible failures?

> + */
> +int rte_mempool_handler_register(struct rte_mempool_handler *h);
> +
> +/**
> + * Macro to statically register an external pool handler.
> + */
> +#define MEMPOOL_REGISTER_HANDLER(h)					\
> +	void mp_hdlr_init_##h(void);					\
> +	void __attribute__((constructor, used)) mp_hdlr_init_##h(void)	\
> +	{								\
> +		rte_mempool_handler_register(&h);			\
> +	}
> +

There might be a little catch. If there is no more room for handlers, calling
rte_mempool_handler_register would fail silently, as error reporting does not
work from a constructor (or at least, that is my experience).

Not a big deal but...
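One workaround (a sketch only, not DPDK code; the names here are made up) is to
record the error in the constructor and let a later, explicit init step report it:

```c
#include <assert.h>
#include <errno.h>

/* set by the constructor if registration fails; checked later */
static int deferred_reg_err;

static int
fake_register(void)
{
	return -ENOSPC;	/* pretend the handler table is full */
}

static void __attribute__((constructor, used))
reg_ctor(void)
{
	int ret = fake_register();

	if (ret < 0)
		deferred_reg_err = ret;	/* cannot report it from here */
}

/* an explicit init step the application can call and act upon */
static int
check_handler_registrations(void)
{
	return deferred_reg_err;
}
```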

>  /**
>   * An object callback function for mempool.
>   *
> @@ -736,7 +914,7 @@ void rte_mempool_dump(FILE *f, struct rte_mempool *mp);
>   */
>  static inline void __attribute__((always_inline))
>  __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
> -		    unsigned n, int is_mp)
> +		    unsigned n, __rte_unused int is_mp)
>  {
>  	struct rte_mempool_cache *cache;
>  	uint32_t index;
> @@ -774,7 +952,7 @@ __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
>  	cache->len += n;
>  
>  	if (cache->len >= flushthresh) {
> -		rte_ring_mp_enqueue_bulk(mp->ring, &cache->objs[cache_size],
> +		rte_mempool_ext_put_bulk(mp, &cache->objs[cache_size],
>  				cache->len - cache_size);
>  		cache->len = cache_size;
>  	}
> @@ -782,26 +960,10 @@ __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
>  	return;
>  
>  ring_enqueue:
> -
>  	/* push remaining objects in ring */
> -#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
> -	if (is_mp) {
> -		if (rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n) < 0)
> -			rte_panic("cannot put objects in mempool\n");
> -	}
> -	else {
> -		if (rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n) < 0)
> -			rte_panic("cannot put objects in mempool\n");
> -	}
> -#else
> -	if (is_mp)
> -		rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n);
> -	else
> -		rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n);
> -#endif
> +	rte_mempool_ext_put_bulk(mp, obj_table, n);

This is a big change. Does it remove the RTE_LIBRTE_MEMPOOL_DEBUG config option
entirely? If so, I suggest first doing this in a separate patch and then
replacing the original *_enqueue_bulk by your *_ext_put_bulk (or better
*_ops_put_bulk, as I explain below).

>  }
>  
> -
>  /**
>   * Put several objects back in the mempool (multi-producers safe).
>   *
> @@ -922,7 +1084,7 @@ rte_mempool_put(struct rte_mempool *mp, void *obj)
>   */
>  static inline int __attribute__((always_inline))
>  __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
> -		   unsigned n, int is_mc)
> +		   unsigned n, __rte_unused int is_mc)
>  {
>  	int ret;
>  	struct rte_mempool_cache *cache;
> @@ -945,7 +1107,8 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
>  		uint32_t req = n + (cache_size - cache->len);
>  
>  		/* How many do we require i.e. number to fill the cache + the request */
> -		ret = rte_ring_mc_dequeue_bulk(mp->ring, &cache->objs[cache->len], req);
> +		ret = rte_mempool_ext_get_bulk(mp,
> +			&cache->objs[cache->len], req);
>  		if (unlikely(ret < 0)) {
>  			/*
>  			 * In the offchance that we are buffer constrained,
> @@ -972,10 +1135,7 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
>  ring_dequeue:
>  
>  	/* get remaining objects from ring */
> -	if (is_mc)
> -		ret = rte_ring_mc_dequeue_bulk(mp->ring, obj_table, n);
> -	else
> -		ret = rte_ring_sc_dequeue_bulk(mp->ring, obj_table, n);
> +	ret = rte_mempool_ext_get_bulk(mp, obj_table, n);
>  
>  	if (ret < 0)
>  		__MEMPOOL_STAT_ADD(mp, get_fail, n);
> diff --git a/lib/librte_mempool/rte_mempool_default.c b/lib/librte_mempool/rte_mempool_default.c
> new file mode 100644
> index 0000000..a6ac65a
> --- /dev/null
> +++ b/lib/librte_mempool/rte_mempool_default.c
> @@ -0,0 +1,147 @@
> +/*-
> + *   BSD LICENSE
> + *
> + *   Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
> + *   All rights reserved.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in
> + *       the documentation and/or other materials provided with the
> + *       distribution.
> + *     * Neither the name of Intel Corporation nor the names of its
> + *       contributors may be used to endorse or promote products derived
> + *       from this software without specific prior written permission.
> + *
> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#include <stdio.h>
> +#include <string.h>
> +
> +#include <rte_errno.h>
> +#include <rte_ring.h>
> +#include <rte_mempool.h>
> +
> +static int
> +common_ring_mp_put(void *p, void * const *obj_table, unsigned n)
> +{
> +	return rte_ring_mp_enqueue_bulk((struct rte_ring *)p, obj_table, n);
> +}
> +
> +static int
> +common_ring_sp_put(void *p, void * const *obj_table, unsigned n)
> +{
> +	return rte_ring_sp_enqueue_bulk((struct rte_ring *)p, obj_table, n);
> +}
> +
> +static int
> +common_ring_mc_get(void *p, void **obj_table, unsigned n)
> +{
> +	return rte_ring_mc_dequeue_bulk((struct rte_ring *)p, obj_table, n);
> +}
> +
> +static int
> +common_ring_sc_get(void *p, void **obj_table, unsigned n)
> +{
> +	return rte_ring_sc_dequeue_bulk((struct rte_ring *)p, obj_table, n);
> +}
> +
> +static unsigned
> +common_ring_get_count(void *p)
> +{
> +	return rte_ring_count((struct rte_ring *)p);
> +}
> +
> +
> +static void *
> +common_ring_alloc(struct rte_mempool *mp)
> +{
> +	int rg_flags = 0, ret;
> +	char rg_name[RTE_RING_NAMESIZE];
> +	struct rte_ring *r;
> +
> +	ret = snprintf(rg_name, sizeof(rg_name),
> +		RTE_MEMPOOL_MZ_FORMAT, mp->name);
> +	if (ret < 0 || ret >= (int)sizeof(rg_name)) {
> +		rte_errno = ENAMETOOLONG;
> +		return NULL;
> +	}
> +
> +	/* ring flags */
> +	if (mp->flags & MEMPOOL_F_SP_PUT)
> +		rg_flags |= RING_F_SP_ENQ;
> +	if (mp->flags & MEMPOOL_F_SC_GET)
> +		rg_flags |= RING_F_SC_DEQ;
> +
> +	/* Allocate the ring that will be used to store objects.
> +	 * Ring functions will return appropriate errors if we are
> +	 * running as a secondary process etc., so no checks made
> +	 * in this function for that condition. */
> +	r = rte_ring_create(rg_name, rte_align32pow2(mp->size + 1),
> +		mp->socket_id, rg_flags);
> +
> +	return r;
> +}
> +
> +static void
> +common_ring_free(void *p)
> +{
> +	rte_ring_free((struct rte_ring *)p);
> +}
> +
> +static struct rte_mempool_handler handler_mp_mc = {
> +	.name = "ring_mp_mc",
> +	.alloc = common_ring_alloc,
> +	.free = common_ring_free,
> +	.put = common_ring_mp_put,
> +	.get = common_ring_mc_get,
> +	.get_count = common_ring_get_count,
> +};
> +
> +static struct rte_mempool_handler handler_sp_sc = {
> +	.name = "ring_sp_sc",
> +	.alloc = common_ring_alloc,
> +	.free = common_ring_free,
> +	.put = common_ring_sp_put,
> +	.get = common_ring_sc_get,
> +	.get_count = common_ring_get_count,
> +};
> +
> +static struct rte_mempool_handler handler_mp_sc = {
> +	.name = "ring_mp_sc",
> +	.alloc = common_ring_alloc,
> +	.free = common_ring_free,
> +	.put = common_ring_mp_put,
> +	.get = common_ring_sc_get,
> +	.get_count = common_ring_get_count,
> +};
> +
> +static struct rte_mempool_handler handler_sp_mc = {
> +	.name = "ring_sp_mc",
> +	.alloc = common_ring_alloc,
> +	.free = common_ring_free,
> +	.put = common_ring_sp_put,
> +	.get = common_ring_mc_get,
> +	.get_count = common_ring_get_count,
> +};
> +

Introducing those handlers can go as a separate patch. IMHO, that would simplify
the review process a lot. First introduce the mechanism, then add something
inside.

I'd also note that those handlers are always available, and document what kind
of memory they use...

> +MEMPOOL_REGISTER_HANDLER(handler_mp_mc);
> +MEMPOOL_REGISTER_HANDLER(handler_sp_sc);
> +MEMPOOL_REGISTER_HANDLER(handler_mp_sc);
> +MEMPOOL_REGISTER_HANDLER(handler_sp_mc);
> diff --git a/lib/librte_mempool/rte_mempool_handler.c b/lib/librte_mempool/rte_mempool_handler.c
> new file mode 100644
> index 0000000..78611f8
> --- /dev/null
> +++ b/lib/librte_mempool/rte_mempool_handler.c
> @@ -0,0 +1,139 @@
> +/*-
> + *   BSD LICENSE
> + *
> + *   Copyright(c) 2016 Intel Corporation. All rights reserved.
> + *   Copyright(c) 2016 6WIND S.A.
> + *   All rights reserved.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in
> + *       the documentation and/or other materials provided with the
> + *       distribution.
> + *     * Neither the name of Intel Corporation nor the names of its
> + *       contributors may be used to endorse or promote products derived
> + *       from this software without specific prior written permission.
> + *
> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#include <stdio.h>
> +#include <string.h>
> +
> +#include <rte_mempool.h>
> +
> +/* indirect jump table to support external memory pools */
> +struct rte_mempool_handler_table rte_mempool_handler_table = {
> +	.sl =  RTE_SPINLOCK_INITIALIZER ,
> +	.num_handlers = 0
> +};
> +
> +/* add a new handler in rte_mempool_handler_table, return its index */

It seems to me that there is no way to attach an opaque pointer to the handler.
In such a case I would expect I could do something like:

struct my_handler {
	struct rte_mempool_handler h;
	...
} handler;

rte_mempool_handler_register(&handler.h);

But I cannot, because you copy the contents of the handler. By the way, this
should be documented.

How can I pass an opaque pointer here? The only way I see is through the
rte_mempool.pool. In that case, what about renaming the rte_mempool_handler
to rte_mempool_ops? Because semantically, it is not a handler, it just holds
the operations.

This would improve some names:

rte_mempool_ext_alloc -> rte_mempool_ops_alloc
rte_mempool_ext_free -> rte_mempool_ops_free
rte_mempool_ext_get_count -> rte_mempool_ops_get_count
rte_mempool_handler_register -> rte_mempool_ops_register

seems to be more readable to me. The *_ext_* mark does not say anything valuable.
It just scares a bit :).
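To illustrate the embedding idea: if the registration code kept the caller's
pointer instead of copying the struct, the wrapper (and its private state)
could be recovered with a container_of-style macro. This is a sketch with a
simplified stand-in for struct rte_mempool_handler.

```c
#include <assert.h>
#include <stddef.h>

struct ops {			/* stand-in for struct rte_mempool_handler */
	const char *name;
};

struct my_handler {
	struct ops h;		/* embedded ops, registered by pointer */
	int private_state;	/* opaque data that a copying API loses */
};

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* given only the registered ops pointer, recover the wrapper */
static struct my_handler *
to_my_handler(struct ops *h)
{
	return container_of(h, struct my_handler, h);
}
```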

> +int
> +rte_mempool_handler_register(struct rte_mempool_handler *h)
> +{
> +	struct rte_mempool_handler *handler;
> +	int16_t handler_idx;
> +
> +	rte_spinlock_lock(&rte_mempool_handler_table.sl);
> +
> +	if (rte_mempool_handler_table.num_handlers >= RTE_MEMPOOL_MAX_HANDLER_IDX) {
> +		rte_spinlock_unlock(&rte_mempool_handler_table.sl);
> +		RTE_LOG(ERR, MEMPOOL,
> +			"Maximum number of mempool handlers exceeded\n");
> +		return -ENOSPC;
> +	}
> +
> +	if (h->put == NULL || h->get == NULL || h->get_count == NULL) {
> +		rte_spinlock_unlock(&rte_mempool_handler_table.sl);
> +		RTE_LOG(ERR, MEMPOOL,
> +			"Missing callback while registering mempool handler\n");
> +		return -EINVAL;
> +	}
> +
> +	handler_idx = rte_mempool_handler_table.num_handlers++;
> +	handler = &rte_mempool_handler_table.handler[handler_idx];
> +	snprintf(handler->name, sizeof(handler->name), "%s", h->name);
> +	handler->alloc = h->alloc;
> +	handler->put = h->put;
> +	handler->get = h->get;
> +	handler->get_count = h->get_count;
> +
> +	rte_spinlock_unlock(&rte_mempool_handler_table.sl);
> +
> +	return handler_idx;
> +}
> +
> +/* wrapper to allocate an external pool handler */
> +void *
> +rte_mempool_ext_alloc(struct rte_mempool *mp)
> +{
> +	struct rte_mempool_handler *handler;
> +
> +	handler = rte_mempool_handler_get(mp->handler_idx);
> +	if (handler->alloc == NULL)
> +		return NULL;
> +	return handler->alloc(mp);
> +}
> +
> +/* wrapper to free an external pool handler */
> +void
> +rte_mempool_ext_free(struct rte_mempool *mp)
> +{
> +	struct rte_mempool_handler *handler;
> +
> +	handler = rte_mempool_handler_get(mp->handler_idx);
> +	if (handler->free == NULL)
> +		return;
> +	return handler->free(mp);
> +}
> +
> +/* wrapper to get available objects in an external pool handler */
> +unsigned
> +rte_mempool_ext_get_count(const struct rte_mempool *mp)
> +{
> +	struct rte_mempool_handler *handler;
> +
> +	handler = rte_mempool_handler_get(mp->handler_idx);
> +	return handler->get_count(mp->pool);
> +}
> +
> +/* set the handler of a mempool */

The doc comment should say "this sets a handler previously registered by
the rte_mempool_handler_register function ...". I was confused and didn't
understand how the handlers are inserted into the table.

> +int
> +rte_mempool_set_handler(struct rte_mempool *mp, const char *name)
> +{
> +	struct rte_mempool_handler *handler = NULL;
> +	unsigned i;
> +
> +	/* too late, the mempool is already populated */
> +	if (mp->flags & MEMPOOL_F_RING_CREATED)
> +		return -EEXIST;
> +
> +	for (i = 0; i < rte_mempool_handler_table.num_handlers; i++) {
> +		if (!strcmp(name, rte_mempool_handler_table.handler[i].name)) {
> +			handler = &rte_mempool_handler_table.handler[i];
> +			break;
> +		}
> +	}
> +
> +	if (handler == NULL)
> +		return -EINVAL;
> +
> +	mp->handler_idx = i;
> +	return 0;
> +}
> diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
> index f63461b..a0e9aed 100644
> --- a/lib/librte_mempool/rte_mempool_version.map
> +++ b/lib/librte_mempool/rte_mempool_version.map
> @@ -19,6 +19,8 @@ DPDK_2.0 {
>  DPDK_16.7 {
>  	global:
>  
> +	rte_mempool_handler_table;
> +
>  	rte_mempool_check_cookies;
>  	rte_mempool_obj_iter;
>  	rte_mempool_mem_iter;
> @@ -29,6 +31,8 @@ DPDK_16.7 {
>  	rte_mempool_populate_default;
>  	rte_mempool_populate_anon;
>  	rte_mempool_free;
> +	rte_mempool_set_handler;
> +	rte_mempool_handler_register;
>  
>  	local: *;
>  } DPDK_2.0;

Regards
Jan

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [dpdk-dev, v5, 3/3] mbuf: get default mempool handler from configuration
  2016-05-19 13:45         ` [PATCH v5 3/3] mbuf: get default mempool handler from configuration David Hunt
@ 2016-05-23 12:40           ` Jan Viktorin
  2016-05-31  9:26             ` Hunt, David
  0 siblings, 1 reply; 238+ messages in thread
From: Jan Viktorin @ 2016-05-23 12:40 UTC (permalink / raw)
  To: David Hunt; +Cc: dev, olivier.matz, yuanhan.liu, pmatilai

On Thu, 19 May 2016 14:45:01 +0100
David Hunt <david.hunt@intel.com> wrote:

> By default, the mempool handler used for mbuf allocations is a multi
> producer and multi consumer ring. We could imagine a target (maybe some
> network processors?) that provides an hardware-assisted pool
> mechanism. In this case, the default configuration for this architecture
> would contain a different value for RTE_MBUF_DEFAULT_MEMPOOL_HANDLER.
> 
> Signed-off-by: David Hunt <david.hunt@intel.com>
> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> 
> ---
> config/common_base         |  1 +
>  lib/librte_mbuf/rte_mbuf.c | 21 +++++++++++++++++----
>  2 files changed, 18 insertions(+), 4 deletions(-)
> 
> diff --git a/config/common_base b/config/common_base
> index 3535c6e..5cf5e52 100644
> --- a/config/common_base
> +++ b/config/common_base
> @@ -394,6 +394,7 @@ CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG=n
>  #
>  CONFIG_RTE_LIBRTE_MBUF=y
>  CONFIG_RTE_LIBRTE_MBUF_DEBUG=n
> +CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_HANDLER="ring_mp_mc"
>  CONFIG_RTE_MBUF_REFCNT_ATOMIC=y
>  CONFIG_RTE_PKTMBUF_HEADROOM=128
>  
> diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
> index eec1456..5dcdc05 100644
> --- a/lib/librte_mbuf/rte_mbuf.c
> +++ b/lib/librte_mbuf/rte_mbuf.c
> @@ -153,6 +153,7 @@ rte_pktmbuf_pool_create(const char *name, unsigned n,
>  	unsigned cache_size, uint16_t priv_size, uint16_t data_room_size,
>  	int socket_id)
>  {
> +	struct rte_mempool *mp;
>  	struct rte_pktmbuf_pool_private mbp_priv;
>  	unsigned elt_size;
>  
> @@ -167,10 +168,22 @@ rte_pktmbuf_pool_create(const char *name, unsigned n,
>  	mbp_priv.mbuf_data_room_size = data_room_size;
>  	mbp_priv.mbuf_priv_size = priv_size;
>  
> -	return rte_mempool_create(name, n, elt_size,
> -		cache_size, sizeof(struct rte_pktmbuf_pool_private),
> -		rte_pktmbuf_pool_init, &mbp_priv, rte_pktmbuf_init, NULL,
> -		socket_id, 0);
> +	mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
> +		 sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
> +	if (mp == NULL)
> +		return NULL;
> +
> +	rte_mempool_set_handler(mp, RTE_MBUF_DEFAULT_MEMPOOL_HANDLER);

A check for failure is missing here, especially for -EEXIST.
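The missing check could look like the sketch below. The functions here are
stubs standing in for rte_mempool_set_handler() and rte_mempool_free(); the
point is only the error-handling shape: free the half-built pool and return
NULL when setting the handler fails.

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

static int set_handler_ret = -EEXIST;	/* simulate the failure path */
static int pool_freed;

/* stand-in for rte_mempool_set_handler() */
static int
set_handler(void *mp, const char *name)
{
	(void)mp; (void)name;
	return set_handler_ret;
}

/* stand-in for rte_mempool_free() */
static void
pool_free(void *mp)
{
	(void)mp;
	pool_freed = 1;
}

/* the shape rte_pktmbuf_pool_create() could take */
static void *
pool_create(void *mp)
{
	if (set_handler(mp, "ring_mp_mc") < 0) {
		pool_free(mp);
		return NULL;
	}
	return mp;
}
```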

> +	rte_pktmbuf_pool_init(mp, &mbp_priv);
> +
> +	if (rte_mempool_populate_default(mp) < 0) {
> +		rte_mempool_free(mp);
> +		return NULL;
> +	}
> +
> +	rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);
> +
> +	return mp;
>  }
>  
>  /* do some sanity checks on a mbuf: panic if it fails */



-- 
   Jan Viktorin                  E-mail: Viktorin@RehiveTech.com
   System Architect              Web:    www.RehiveTech.com
   RehiveTech
   Brno, Czech Republic

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [dpdk-dev, v5, 2/3] app/test: test external mempool handler
  2016-05-19 13:45         ` [PATCH v5 2/3] app/test: test external mempool handler David Hunt
@ 2016-05-23 12:45           ` Jan Viktorin
  2016-05-31  9:17             ` Hunt, David
  0 siblings, 1 reply; 238+ messages in thread
From: Jan Viktorin @ 2016-05-23 12:45 UTC (permalink / raw)
  To: David Hunt; +Cc: dev, olivier.matz, yuanhan.liu, pmatilai

On Thu, 19 May 2016 14:45:00 +0100
David Hunt <david.hunt@intel.com> wrote:

> Use a minimal custom mempool external handler and check that it also
> passes basic mempool autotests.
> 
> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> Signed-off-by: David Hunt <david.hunt@intel.com>
> 
> ---
> app/test/test_mempool.c | 113 ++++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 113 insertions(+)
> 
> diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c
> index 9f02758..f55d126 100644
> --- a/app/test/test_mempool.c
> +++ b/app/test/test_mempool.c
> @@ -85,6 +85,96 @@
>  static rte_atomic32_t synchro;
>  
>  /*
> + * Simple example of custom mempool structure. Holds pointers to all the
> + * elements which are simply malloc'd in this example.
> + */
> +struct custom_mempool {
> +	rte_spinlock_t lock;
> +	unsigned count;
> +	unsigned size;
> +	void *elts[];
> +};
> +
> +/*
> + * Loop though all the element pointers and allocate a chunk of memory, then

s/though/through/

> + * insert that memory into the ring.
> + */
> +static void *
> +custom_mempool_alloc(struct rte_mempool *mp)
> +{
> +	struct custom_mempool *cm;
> +
> +	cm = rte_zmalloc("custom_mempool",
> +		sizeof(struct custom_mempool) + mp->size * sizeof(void *), 0);
> +	if (cm == NULL)
> +		return NULL;
> +
> +	rte_spinlock_init(&cm->lock);
> +	cm->count = 0;
> +	cm->size = mp->size;
> +	return cm;
> +}
> +
> +static void
> +custom_mempool_free(void *p)
> +{
> +	rte_free(p);
> +}
> +
> +static int
> +custom_mempool_put(void *p, void * const *obj_table, unsigned n)
> +{
> +	struct custom_mempool *cm = (struct custom_mempool *)p;
> +	int ret = 0;
> +
> +	rte_spinlock_lock(&cm->lock);
> +	if (cm->count + n > cm->size) {
> +		ret = -ENOBUFS;
> +	} else {
> +		memcpy(&cm->elts[cm->count], obj_table, sizeof(void *) * n);
> +		cm->count += n;
> +	}
> +	rte_spinlock_unlock(&cm->lock);
> +	return ret;
> +}
> +
> +
> +static int
> +custom_mempool_get(void *p, void **obj_table, unsigned n)
> +{
> +	struct custom_mempool *cm = (struct custom_mempool *)p;
> +	int ret = 0;
> +
> +	rte_spinlock_lock(&cm->lock);
> +	if (n > cm->count) {
> +		ret = -ENOENT;
> +	} else {
> +		cm->count -= n;
> +		memcpy(obj_table, &cm->elts[cm->count], sizeof(void *) * n);
> +	}
> +	rte_spinlock_unlock(&cm->lock);
> +	return ret;
> +}
> +
> +static unsigned
> +custom_mempool_get_count(void *p)
> +{
> +	struct custom_mempool *cm = (struct custom_mempool *)p;
> +	return cm->count;
> +}
> +
> +static struct rte_mempool_handler mempool_handler_custom = {
> +	.name = "custom_handler",
> +	.alloc = custom_mempool_alloc,
> +	.free = custom_mempool_free,
> +	.put = custom_mempool_put,
> +	.get = custom_mempool_get,
> +	.get_count = custom_mempool_get_count,
> +};
> +
> +MEMPOOL_REGISTER_HANDLER(mempool_handler_custom);

What about dropping the rte_mempool_handler.name field and deriving the
name from the variable name given to MEMPOOL_REGISTER_HANDLER?
The MEMPOOL_REGISTER_HANDLER should do some macro magic inside and call

  rte_mempool_handler_register(name, handler);

Just an idea...

> +
> +/*
>   * save the object number in the first 4 bytes of object data. All
>   * other bytes are set to 0.
>   */
> @@ -479,6 +569,7 @@ test_mempool(void)
>  {
>  	struct rte_mempool *mp_cache = NULL;
>  	struct rte_mempool *mp_nocache = NULL;
> +	struct rte_mempool *mp_ext = NULL;
>  
>  	rte_atomic32_init(&synchro);
>  
> @@ -507,6 +598,27 @@ test_mempool(void)
>  		goto err;
>  	}
>  
> +	/* create a mempool with an external handler */
> +	mp_ext = rte_mempool_create_empty("test_ext",
> +		MEMPOOL_SIZE,
> +		MEMPOOL_ELT_SIZE,
> +		RTE_MEMPOOL_CACHE_MAX_SIZE, 0,
> +		SOCKET_ID_ANY, 0);
> +
> +	if (mp_ext == NULL) {
> +		printf("cannot allocate mp_ext mempool\n");
> +		goto err;
> +	}
> +	if (rte_mempool_set_handler(mp_ext, "custom_handler") < 0) {
> +		printf("cannot set custom handler\n");
> +		goto err;
> +	}
> +	if (rte_mempool_populate_default(mp_ext) < 0) {
> +		printf("cannot populate mp_ext mempool\n");
> +		goto err;
> +	}
> +	rte_mempool_obj_iter(mp_ext, my_obj_init, NULL);
> +

The test is becoming quite complex. What about having several smaller
tests, each with clear setup and cleanup steps?

>  	/* retrieve the mempool from its name */
>  	if (rte_mempool_lookup("test_nocache") != mp_nocache) {
>  		printf("Cannot lookup mempool from its name\n");
> @@ -547,6 +659,7 @@ test_mempool(void)
>  err:
>  	rte_mempool_free(mp_nocache);
>  	rte_mempool_free(mp_cache);
> +	rte_mempool_free(mp_ext);
>  	return -1;
>  }
>  

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [dpdk-dev,v5,1/3] mempool: support external handler
  2016-05-23 12:35           ` [dpdk-dev,v5,1/3] " Jan Viktorin
@ 2016-05-24 14:04             ` Hunt, David
  2016-05-31  9:09             ` Hunt, David
  1 sibling, 0 replies; 238+ messages in thread
From: Hunt, David @ 2016-05-24 14:04 UTC (permalink / raw)
  To: Jan Viktorin; +Cc: dev, olivier.matz, yuanhan.liu, pmatilai



On 5/23/2016 1:35 PM, Jan Viktorin wrote:
> Hello David,
>
> please, see my comments inline.
>
> I didn't see the previous versions of the mempool (well, only very roughly) so I am
> probably missing some points... My point of view is as a user of the handler API.
> I need to understand the API to implement a custom handler for my purposes.

Thanks for the review, Jan.

I'm working on the changes now, will post soon. I'll reply to each of 
your emails when I'm ready with the patch.

Regards,
David.


* Re: [PATCH v5 1/3] mempool: support external handler
  2016-05-19 13:44         ` [PATCH v5 1/3] mempool: support external handler David Hunt
  2016-05-23 12:35           ` [dpdk-dev,v5,1/3] " Jan Viktorin
@ 2016-05-24 15:35           ` Jerin Jacob
  2016-05-27  9:52             ` Hunt, David
  1 sibling, 1 reply; 238+ messages in thread
From: Jerin Jacob @ 2016-05-24 15:35 UTC (permalink / raw)
  To: David Hunt; +Cc: dev, olivier.matz, yuanhan.liu, pmatilai

On Thu, May 19, 2016 at 02:44:59PM +0100, David Hunt wrote:
> Until now, the objects stored in a mempool were internally stored in a
> ring. This patch introduces the possibility to register external handlers
> replacing the ring.
> 
> The default behavior remains unchanged, but calling the new function
> rte_mempool_set_handler() right after rte_mempool_create_empty() allows
> changing the handler that will be used when populating the mempool.
> 
> v5 changes: rebasing on top of 35 patch set mempool work.
> 
> Signed-off-by: David Hunt <david.hunt@intel.com>
> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> ---
>  app/test/test_mempool_perf.c               |   1 -
>  lib/librte_mempool/Makefile                |   2 +
>  lib/librte_mempool/rte_mempool.c           |  73 ++++------
>  lib/librte_mempool/rte_mempool.h           | 212 +++++++++++++++++++++++++----
>  lib/librte_mempool/rte_mempool_default.c   | 147 ++++++++++++++++++++
>  lib/librte_mempool/rte_mempool_handler.c   | 139 +++++++++++++++++++
>  lib/librte_mempool/rte_mempool_version.map |   4 +
>  7 files changed, 506 insertions(+), 72 deletions(-)
>  create mode 100644 lib/librte_mempool/rte_mempool_default.c
>  create mode 100644 lib/librte_mempool/rte_mempool_handler.c
> 
> diff --git a/app/test/test_mempool_perf.c b/app/test/test_mempool_perf.c
> index cdc02a0..091c1df 100644
> --- a/app/test/test_mempool_perf.c
> +++ b/app/test/test_mempool_perf.c
> @@ -161,7 +161,6 @@ per_lcore_mempool_test(__attribute__((unused)) void *arg)
>  							   n_get_bulk);
>  				if (unlikely(ret < 0)) {
>  					rte_mempool_dump(stdout, mp);
> -					rte_ring_dump(stdout, mp->ring);
>  					/* in this case, objects are lost... */
>  					return -1;
>  				}
> diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
> index 43423e0..f19366e 100644
> --- a/lib/librte_mempool/Makefile
> +++ b/lib/librte_mempool/Makefile
> @@ -42,6 +42,8 @@ LIBABIVER := 2
>  
>  # all source are stored in SRCS-y
>  SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool.c
> +SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_handler.c
> +SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_default.c
>  # install includes
>  SYMLINK-$(CONFIG_RTE_LIBRTE_MEMPOOL)-include := rte_mempool.h
>  
> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> index 1ab6701..6ec2b3f 100644
> --- a/lib/librte_mempool/rte_mempool.c
> +++ b/lib/librte_mempool/rte_mempool.c
> @@ -148,7 +148,7 @@ mempool_add_elem(struct rte_mempool *mp, void *obj, phys_addr_t physaddr)
>  #endif
>  
>  	/* enqueue in ring */
> -	rte_ring_sp_enqueue(mp->ring, obj);
> +	rte_mempool_ext_put_bulk(mp, &obj, 1);
>  }
>  
>  /* call obj_cb() for each mempool element */
> @@ -300,40 +300,6 @@ rte_mempool_xmem_usage(__rte_unused void *vaddr, uint32_t elt_num,
>  	return (size_t)paddr_idx << pg_shift;
>  }
>  
> -/* create the internal ring */
> -static int
> -rte_mempool_ring_create(struct rte_mempool *mp)
> -{
> -	int rg_flags = 0, ret;
> -	char rg_name[RTE_RING_NAMESIZE];
> -	struct rte_ring *r;
> -
> -	ret = snprintf(rg_name, sizeof(rg_name),
> -		RTE_MEMPOOL_MZ_FORMAT, mp->name);
> -	if (ret < 0 || ret >= (int)sizeof(rg_name))
> -		return -ENAMETOOLONG;
> -
> -	/* ring flags */
> -	if (mp->flags & MEMPOOL_F_SP_PUT)
> -		rg_flags |= RING_F_SP_ENQ;
> -	if (mp->flags & MEMPOOL_F_SC_GET)
> -		rg_flags |= RING_F_SC_DEQ;
> -
> -	/* Allocate the ring that will be used to store objects.
> -	 * Ring functions will return appropriate errors if we are
> -	 * running as a secondary process etc., so no checks made
> -	 * in this function for that condition.
> -	 */
> -	r = rte_ring_create(rg_name, rte_align32pow2(mp->size + 1),
> -		mp->socket_id, rg_flags);
> -	if (r == NULL)
> -		return -rte_errno;
> -
> -	mp->ring = r;
> -	mp->flags |= MEMPOOL_F_RING_CREATED;
> -	return 0;
> -}
> -
>  /* free a memchunk allocated with rte_memzone_reserve() */
>  static void
>  rte_mempool_memchunk_mz_free(__rte_unused struct rte_mempool_memhdr *memhdr,
> @@ -351,7 +317,7 @@ rte_mempool_free_memchunks(struct rte_mempool *mp)
>  	void *elt;
>  
>  	while (!STAILQ_EMPTY(&mp->elt_list)) {
> -		rte_ring_sc_dequeue(mp->ring, &elt);
> +		rte_mempool_ext_get_bulk(mp, &elt, 1);
>  		(void)elt;
>  		STAILQ_REMOVE_HEAD(&mp->elt_list, next);
>  		mp->populated_size--;
> @@ -380,15 +346,18 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
>  	unsigned i = 0;
>  	size_t off;
>  	struct rte_mempool_memhdr *memhdr;
> -	int ret;
>  
>  	/* create the internal ring if not already done */
>  	if ((mp->flags & MEMPOOL_F_RING_CREATED) == 0) {
> -		ret = rte_mempool_ring_create(mp);
> -		if (ret < 0)
> -			return ret;
> +		rte_errno = 0;
> +		mp->pool = rte_mempool_ext_alloc(mp);
> +		if (mp->pool == NULL) {
> +			if (rte_errno == 0)
> +				return -EINVAL;
> +			else
> +				return -rte_errno;
> +		}
>  	}
> -
>  	/* mempool is already populated */
>  	if (mp->populated_size >= mp->size)
>  		return -ENOSPC;
> @@ -700,7 +669,7 @@ rte_mempool_free(struct rte_mempool *mp)
>  	rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
>  
>  	rte_mempool_free_memchunks(mp);
> -	rte_ring_free(mp->ring);
> +	rte_mempool_ext_free(mp);
>  	rte_memzone_free(mp->mz);
>  }
>  
> @@ -812,6 +781,20 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
>  		RTE_PTR_ADD(mp, MEMPOOL_HEADER_SIZE(mp, 0));
>  
>  	te->data = mp;
> +
> +	/*
> +	 * Since we have 4 combinations of the SP/SC/MP/MC examine the flags to
> +	 * set the correct index into the handler table.
> +	 */
> +	if ((flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET)) == (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
> +		rte_mempool_set_handler(mp, "ring_sp_sc");
> +	else if (flags & MEMPOOL_F_SP_PUT)
> +		rte_mempool_set_handler(mp, "ring_sp_mc");
> +	else if (flags & MEMPOOL_F_SC_GET)
> +		rte_mempool_set_handler(mp, "ring_mp_sc");
> +	else
> +		rte_mempool_set_handler(mp, "ring_mp_mc");

IMO, we should decouple the implementation-specific flags of _an_
external pool manager implementation from the generic
rte_mempool_create_empty function; going forward, when we introduce new
flags for a custom HW-accelerated external pool manager, this common
code will become bloated.

> +
>  	rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
>  	TAILQ_INSERT_TAIL(mempool_list, te, next);
>  	rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
> @@ -927,7 +910,7 @@ rte_mempool_count(const struct rte_mempool *mp)
>  	unsigned count;
>  	unsigned lcore_id;
>  
> -	count = rte_ring_count(mp->ring);
> +	count = rte_mempool_ext_get_count(mp);
>  
>  	if (mp->cache_size == 0)
>  		return count;
> @@ -1120,7 +1103,7 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
>  
>  	fprintf(f, "mempool <%s>@%p\n", mp->name, mp);
>  	fprintf(f, "  flags=%x\n", mp->flags);
> -	fprintf(f, "  ring=<%s>@%p\n", mp->ring->name, mp->ring);
> +	fprintf(f, "  pool=%p\n", mp->pool);
>  	fprintf(f, "  phys_addr=0x%" PRIx64 "\n", mp->mz->phys_addr);
>  	fprintf(f, "  nb_mem_chunks=%u\n", mp->nb_mem_chunks);
>  	fprintf(f, "  size=%"PRIu32"\n", mp->size);
> @@ -1141,7 +1124,7 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
>  	}
>  
>  	cache_count = rte_mempool_dump_cache(f, mp);
> -	common_count = rte_ring_count(mp->ring);
> +	common_count = rte_mempool_ext_get_count(mp);
>  	if ((cache_count + common_count) > mp->size)
>  		common_count = mp->size - cache_count;
>  	fprintf(f, "  common_pool_count=%u\n", common_count);
> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> index 60339bd..ed2c110 100644
> --- a/lib/librte_mempool/rte_mempool.h
> +++ b/lib/librte_mempool/rte_mempool.h
> @@ -67,6 +67,7 @@
>  #include <inttypes.h>
>  #include <sys/queue.h>
>  
> +#include <rte_spinlock.h>
>  #include <rte_log.h>
>  #include <rte_debug.h>
>  #include <rte_lcore.h>
> @@ -203,7 +204,15 @@ struct rte_mempool_memhdr {
>   */
>  struct rte_mempool {
>  	char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool. */
> -	struct rte_ring *ring;           /**< Ring to store objects. */
> +	void *pool;                      /**< Ring or ext-pool to store objects. */
> +	/**
> +	 * Index into the array of structs containing callback fn pointers.
> +	 * We're using an index here rather than pointers to the callbacks
> +	 * to facilitate any secondary processes that may want to use
> +	 * this mempool. Any function pointers stored in the mempool
> +	 * directly would not be valid for secondary processes.
> +	 */
> +	int32_t handler_idx;
>  	const struct rte_memzone *mz;    /**< Memzone where pool is allocated */
>  	int flags;                       /**< Flags of the mempool. */
>  	int socket_id;                   /**< Socket id passed at mempool creation. */
> @@ -325,6 +334,175 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp,
>  #define __mempool_check_cookies(mp, obj_table_const, n, free) do {} while(0)
>  #endif /* RTE_LIBRTE_MEMPOOL_DEBUG */
>  
> +#define RTE_MEMPOOL_HANDLER_NAMESIZE 32 /**< Max length of handler name. */
> +
> +/** Allocate the external pool. */
> +typedef void *(*rte_mempool_alloc_t)(struct rte_mempool *mp);
> +
> +/** Free the external pool. */
> +typedef void (*rte_mempool_free_t)(void *p);
> +
> +/** Put an object in the external pool. */
> +typedef int (*rte_mempool_put_t)(void *p, void * const *obj_table, unsigned n);
> +
> +/** Get an object from the external pool. */
> +typedef int (*rte_mempool_get_t)(void *p, void **obj_table, unsigned n);
> +
> +/** Return the number of available objects in the external pool. */
> +typedef unsigned (*rte_mempool_get_count)(void *p);
> +
> +/** Structure defining a mempool handler. */
> +struct rte_mempool_handler {
> +	char name[RTE_MEMPOOL_HANDLER_NAMESIZE]; /**< Name of mempool handler */
> +	rte_mempool_alloc_t alloc;       /**< Allocate the external pool. */
> +	rte_mempool_free_t free;         /**< Free the external pool. */
> +	rte_mempool_put_t put;           /**< Put an object. */
> +	rte_mempool_get_t get;           /**< Get an object. */
> +	rte_mempool_get_count get_count; /**< Get the number of available objs. */
> +} __rte_cache_aligned;
> +
> +#define RTE_MEMPOOL_MAX_HANDLER_IDX 16  /**< Max number of registered handlers */
> +
> +/** Structure storing the table of registered handlers. */
> +struct rte_mempool_handler_table {
> +	rte_spinlock_t sl;     /**< Spinlock for add/delete. */
> +	uint32_t num_handlers; /**< Number of handlers in the table. */
> +	/** Storage for all possible handlers. */
> +	struct rte_mempool_handler handler[RTE_MEMPOOL_MAX_HANDLER_IDX];
> +};

add __rte_cache_aligned to this structure to avoid "handler" memory
cacheline being shared with other variables

> +
> +/** Array of registered handlers */
> +extern struct rte_mempool_handler_table rte_mempool_handler_table;
> +
> +/**
> + * @internal Get the mempool handler from its index.
> + *
> + * @param handler_idx
> + *   The index of the handler in the handler table. It must be a valid
> + *   index: (0 <= idx < num_handlers).
> + * @return
> + *   The pointer to the handler in the table.
> + */
> +static struct rte_mempool_handler *

inline?

> +rte_mempool_handler_get(int handler_idx)
> +{
> +	return &rte_mempool_handler_table.handler[handler_idx];
> +}
> +
> +/**
> + * @internal wrapper for external mempool manager alloc callback.
> + *
> + * @param mp
> + *   Pointer to the memory pool.
> + * @return
> + *   The opaque pointer to the external pool.
> + */
> +void *
> +rte_mempool_ext_alloc(struct rte_mempool *mp);
> +
> +/**
> + * @internal wrapper for external mempool manager get callback.
> + *
> + * @param mp
> + *   Pointer to the memory pool.
> + * @param obj_table
> + *   Pointer to a table of void * pointers (objects).
> + * @param n
> + *   Number of objects to get.
> + * @return
> + *   - 0: Success; got n objects.
> + *   - <0: Error; code of handler get function.
> + */
> +static inline int
> +rte_mempool_ext_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
> +{
> +	struct rte_mempool_handler *handler;
> +
> +	handler = rte_mempool_handler_get(mp->handler_idx);
> +	return handler->get(mp->pool, obj_table, n);
> +}
> +
> +/**
> + * @internal wrapper for external mempool manager put callback.
> + *
> + * @param mp
> + *   Pointer to the memory pool.
> + * @param obj_table
> + *   Pointer to a table of void * pointers (objects).
> + * @param n
> + *   Number of objects to put.
> + * @return
> + *   - 0: Success; n objects supplied.
> + *   - <0: Error; code of handler put function.
> + */
> +static inline int
> +rte_mempool_ext_put_bulk(struct rte_mempool *mp, void * const *obj_table,
> +		unsigned n)
> +{
> +	struct rte_mempool_handler *handler;
> +
> +	handler = rte_mempool_handler_get(mp->handler_idx);
> +	return handler->put(mp->pool, obj_table, n);
> +}
> +
> +/**
> + * @internal wrapper for external mempool manager get_count callback.
> + *
> + * @param mp
> + *   Pointer to the memory pool.
> + * @return
> + *   The number of available objects in the external pool.
> + */
> +unsigned
> +rte_mempool_ext_get_count(const struct rte_mempool *mp);
> +
> +/**
> + * @internal wrapper for external mempool manager free callback.
> + *
> + * @param mp
> + *   Pointer to the memory pool.
> + */
> +void
> +rte_mempool_ext_free(struct rte_mempool *mp);
> +
> +/**
> + * Set the handler of a mempool
> + *
> + * This can only be done on a mempool that is not populated, i.e. just after
> + * a call to rte_mempool_create_empty().
> + *
> + * @param mp
> + *   Pointer to the memory pool.
> + * @param name
> + *   Name of the handler.
> + * @return
> + *   - 0: Success; the new handler is configured.
> + *   - <0: Error (errno)
> + */
> +int
> +rte_mempool_set_handler(struct rte_mempool *mp, const char *name);
> +
> +/**
> + * Register an external pool handler.
> + *
> + * @param h
> + *   Pointer to the external pool handler
> + * @return
> + *   - >=0: Success; return the index of the handler in the table.
> + *   - <0: Error (errno)
> + */
> +int rte_mempool_handler_register(struct rte_mempool_handler *h);
> +
> +/**
> + * Macro to statically register an external pool handler.
> + */
> +#define MEMPOOL_REGISTER_HANDLER(h)					\
> +	void mp_hdlr_init_##h(void);					\
> +	void __attribute__((constructor, used)) mp_hdlr_init_##h(void)	\
> +	{								\
> +		rte_mempool_handler_register(&h);			\
> +	}
> +
>  /**
>   * An object callback function for mempool.
>   *
> @@ -736,7 +914,7 @@ void rte_mempool_dump(FILE *f, struct rte_mempool *mp);
>   */
>  static inline void __attribute__((always_inline))
>  __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
> -		    unsigned n, int is_mp)
> +		    unsigned n, __rte_unused int is_mp)
>  {
>  	struct rte_mempool_cache *cache;
>  	uint32_t index;
> @@ -774,7 +952,7 @@ __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
>  	cache->len += n;
>  
>  	if (cache->len >= flushthresh) {
> -		rte_ring_mp_enqueue_bulk(mp->ring, &cache->objs[cache_size],
> +		rte_mempool_ext_put_bulk(mp, &cache->objs[cache_size],
>  				cache->len - cache_size);
>  		cache->len = cache_size;
>  	}
> @@ -782,26 +960,10 @@ __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
>  	return;
>  
>  ring_enqueue:
> -
>  	/* push remaining objects in ring */
> -#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
> -	if (is_mp) {
> -		if (rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n) < 0)
> -			rte_panic("cannot put objects in mempool\n");
> -	}
> -	else {
> -		if (rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n) < 0)
> -			rte_panic("cannot put objects in mempool\n");
> -	}
> -#else
> -	if (is_mp)
> -		rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n);
> -	else
> -		rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n);
> -#endif
> +	rte_mempool_ext_put_bulk(mp, obj_table, n);
>  }
>  
> -
>  /**
>   * Put several objects back in the mempool (multi-producers safe).
>   *
> @@ -922,7 +1084,7 @@ rte_mempool_put(struct rte_mempool *mp, void *obj)
>   */
>  static inline int __attribute__((always_inline))
>  __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
> -		   unsigned n, int is_mc)
> +		   unsigned n, __rte_unused int is_mc)
>  {
>  	int ret;
>  	struct rte_mempool_cache *cache;
> @@ -945,7 +1107,8 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
>  		uint32_t req = n + (cache_size - cache->len);
>  
>  		/* How many do we require i.e. number to fill the cache + the request */
> -		ret = rte_ring_mc_dequeue_bulk(mp->ring, &cache->objs[cache->len], req);
> +		ret = rte_mempool_ext_get_bulk(mp,

This makes an inline function call into a function pointer call. Nothing
wrong with that. However, do you see any performance drop in the
"local cache"-only use case?

http://dpdk.org/dev/patchwork/patch/12993/

> +			&cache->objs[cache->len], req);
>  		if (unlikely(ret < 0)) {
>  			/*
>  			 * In the offchance that we are buffer constrained,
> @@ -972,10 +1135,7 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
>  ring_dequeue:
>  
>  	/* get remaining objects from ring */
> -	if (is_mc)
> -		ret = rte_ring_mc_dequeue_bulk(mp->ring, obj_table, n);
> -	else
> -		ret = rte_ring_sc_dequeue_bulk(mp->ring, obj_table, n);
> +	ret = rte_mempool_ext_get_bulk(mp, obj_table, n);
>  
>  	if (ret < 0)
>  		__MEMPOOL_STAT_ADD(mp, get_fail, n);
> diff --git a/lib/librte_mempool/rte_mempool_default.c b/lib/librte_mempool/rte_mempool_default.c
> new file mode 100644
> index 0000000..a6ac65a
> --- /dev/null
> +++ b/lib/librte_mempool/rte_mempool_default.c
> @@ -0,0 +1,147 @@
> +/*-
> + *   BSD LICENSE
> + *
> + *   Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
> + *   All rights reserved.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in
> + *       the documentation and/or other materials provided with the
> + *       distribution.
> + *     * Neither the name of Intel Corporation nor the names of its
> + *       contributors may be used to endorse or promote products derived
> + *       from this software without specific prior written permission.
> + *
> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#include <stdio.h>
> +#include <string.h>
> +
> +#include <rte_errno.h>
> +#include <rte_ring.h>
> +#include <rte_mempool.h>
> +
> +static int
> +common_ring_mp_put(void *p, void * const *obj_table, unsigned n)
> +{
> +	return rte_ring_mp_enqueue_bulk((struct rte_ring *)p, obj_table, n);
> +}
> +
> +static int
> +common_ring_sp_put(void *p, void * const *obj_table, unsigned n)
> +{
> +	return rte_ring_sp_enqueue_bulk((struct rte_ring *)p, obj_table, n);
> +}
> +
> +static int
> +common_ring_mc_get(void *p, void **obj_table, unsigned n)
> +{
> +	return rte_ring_mc_dequeue_bulk((struct rte_ring *)p, obj_table, n);
> +}
> +
> +static int
> +common_ring_sc_get(void *p, void **obj_table, unsigned n)
> +{
> +	return rte_ring_sc_dequeue_bulk((struct rte_ring *)p, obj_table, n);
> +}
> +
> +static unsigned
> +common_ring_get_count(void *p)
> +{
> +	return rte_ring_count((struct rte_ring *)p);
> +}
> +
> +
> +static void *
> +common_ring_alloc(struct rte_mempool *mp)
> +{
> +	int rg_flags = 0, ret;
> +	char rg_name[RTE_RING_NAMESIZE];
> +	struct rte_ring *r;
> +
> +	ret = snprintf(rg_name, sizeof(rg_name),
> +		RTE_MEMPOOL_MZ_FORMAT, mp->name);
> +	if (ret < 0 || ret >= (int)sizeof(rg_name)) {
> +		rte_errno = ENAMETOOLONG;
> +		return NULL;
> +	}
> +
> +	/* ring flags */
> +	if (mp->flags & MEMPOOL_F_SP_PUT)
> +		rg_flags |= RING_F_SP_ENQ;
> +	if (mp->flags & MEMPOOL_F_SC_GET)
> +		rg_flags |= RING_F_SC_DEQ;
> +
> +	/* Allocate the ring that will be used to store objects.
> +	 * Ring functions will return appropriate errors if we are
> +	 * running as a secondary process etc., so no checks made
> +	 * in this function for that condition. */
> +	r = rte_ring_create(rg_name, rte_align32pow2(mp->size + 1),
> +		mp->socket_id, rg_flags);
> +
> +	return r;
> +}
> +
> +static void
> +common_ring_free(void *p)
> +{
> +	rte_ring_free((struct rte_ring *)p);
> +}
> +
> +static struct rte_mempool_handler handler_mp_mc = {
> +	.name = "ring_mp_mc",
> +	.alloc = common_ring_alloc,
> +	.free = common_ring_free,
> +	.put = common_ring_mp_put,
> +	.get = common_ring_mc_get,
> +	.get_count = common_ring_get_count,
> +};
> +
> +static struct rte_mempool_handler handler_sp_sc = {
> +	.name = "ring_sp_sc",
> +	.alloc = common_ring_alloc,
> +	.free = common_ring_free,
> +	.put = common_ring_sp_put,
> +	.get = common_ring_sc_get,
> +	.get_count = common_ring_get_count,
> +};
> +
> +static struct rte_mempool_handler handler_mp_sc = {
> +	.name = "ring_mp_sc",
> +	.alloc = common_ring_alloc,
> +	.free = common_ring_free,
> +	.put = common_ring_mp_put,
> +	.get = common_ring_sc_get,
> +	.get_count = common_ring_get_count,
> +};
> +
> +static struct rte_mempool_handler handler_sp_mc = {
> +	.name = "ring_sp_mc",
> +	.alloc = common_ring_alloc,
> +	.free = common_ring_free,
> +	.put = common_ring_sp_put,
> +	.get = common_ring_mc_get,
> +	.get_count = common_ring_get_count,
> +};
> +
> +MEMPOOL_REGISTER_HANDLER(handler_mp_mc);
> +MEMPOOL_REGISTER_HANDLER(handler_sp_sc);
> +MEMPOOL_REGISTER_HANDLER(handler_mp_sc);
> +MEMPOOL_REGISTER_HANDLER(handler_sp_mc);
> diff --git a/lib/librte_mempool/rte_mempool_handler.c b/lib/librte_mempool/rte_mempool_handler.c
> new file mode 100644
> index 0000000..78611f8
> --- /dev/null
> +++ b/lib/librte_mempool/rte_mempool_handler.c
> @@ -0,0 +1,139 @@
> +/*-
> + *   BSD LICENSE
> + *
> + *   Copyright(c) 2016 Intel Corporation. All rights reserved.
> + *   Copyright(c) 2016 6WIND S.A.
> + *   All rights reserved.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in
> + *       the documentation and/or other materials provided with the
> + *       distribution.
> + *     * Neither the name of Intel Corporation nor the names of its
> + *       contributors may be used to endorse or promote products derived
> + *       from this software without specific prior written permission.
> + *
> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#include <stdio.h>
> +#include <string.h>
> +
> +#include <rte_mempool.h>
> +
> +/* indirect jump table to support external memory pools */
> +struct rte_mempool_handler_table rte_mempool_handler_table = {
> +	.sl = RTE_SPINLOCK_INITIALIZER,
> +	.num_handlers = 0
> +};
> +
> +/* add a new handler in rte_mempool_handler_table, return its index */
> +int
> +rte_mempool_handler_register(struct rte_mempool_handler *h)
> +{
> +	struct rte_mempool_handler *handler;
> +	int16_t handler_idx;
> +
> +	rte_spinlock_lock(&rte_mempool_handler_table.sl);
> +
> +	if (rte_mempool_handler_table.num_handlers >= RTE_MEMPOOL_MAX_HANDLER_IDX) {
> +		rte_spinlock_unlock(&rte_mempool_handler_table.sl);
> +		RTE_LOG(ERR, MEMPOOL,
> +			"Maximum number of mempool handlers exceeded\n");
> +		return -ENOSPC;
> +	}
> +
> +	if (h->put == NULL || h->get == NULL || h->get_count == NULL) {
> +		rte_spinlock_unlock(&rte_mempool_handler_table.sl);
> +		RTE_LOG(ERR, MEMPOOL,
> +			"Missing callback while registering mempool handler\n");
> +		return -EINVAL;
> +	}
> +
> +	handler_idx = rte_mempool_handler_table.num_handlers++;
> +	handler = &rte_mempool_handler_table.handler[handler_idx];
> +	snprintf(handler->name, sizeof(handler->name), "%s", h->name);
> +	handler->alloc = h->alloc;
> +	handler->free = h->free;
> +	handler->put = h->put;
> +	handler->get = h->get;
> +	handler->get_count = h->get_count;
> +
> +	rte_spinlock_unlock(&rte_mempool_handler_table.sl);
> +
> +	return handler_idx;
> +}
> +
> +/* wrapper to allocate an external pool handler */
> +void *
> +rte_mempool_ext_alloc(struct rte_mempool *mp)
> +{
> +	struct rte_mempool_handler *handler;
> +
> +	handler = rte_mempool_handler_get(mp->handler_idx);
> +	if (handler->alloc == NULL)
> +		return NULL;
> +	return handler->alloc(mp);
> +}
> +
> +/* wrapper to free an external pool handler */
> +void
> +rte_mempool_ext_free(struct rte_mempool *mp)
> +{
> +	struct rte_mempool_handler *handler;
> +
> +	handler = rte_mempool_handler_get(mp->handler_idx);
> +	if (handler->free == NULL)
> +		return;
> +	return handler->free(mp);
> +}
> +
> +/* wrapper to get available objects in an external pool handler */
> +unsigned
> +rte_mempool_ext_get_count(const struct rte_mempool *mp)
> +{
> +	struct rte_mempool_handler *handler;
> +
> +	handler = rte_mempool_handler_get(mp->handler_idx);
> +	return handler->get_count(mp->pool);
> +}
> +
> +/* set the handler of a mempool */
> +int
> +rte_mempool_set_handler(struct rte_mempool *mp, const char *name)
> +{
> +	struct rte_mempool_handler *handler = NULL;
> +	unsigned i;
> +
> +	/* too late, the mempool is already populated */
> +	if (mp->flags & MEMPOOL_F_RING_CREATED)
> +		return -EEXIST;
> +
> +	for (i = 0; i < rte_mempool_handler_table.num_handlers; i++) {
> +		if (!strcmp(name, rte_mempool_handler_table.handler[i].name)) {
> +			handler = &rte_mempool_handler_table.handler[i];
> +			break;
> +		}
> +	}
> +
> +	if (handler == NULL)
> +		return -EINVAL;
> +
> +	mp->handler_idx = i;
> +	return 0;
> +}
> diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
> index f63461b..a0e9aed 100644
> --- a/lib/librte_mempool/rte_mempool_version.map
> +++ b/lib/librte_mempool/rte_mempool_version.map
> @@ -19,6 +19,8 @@ DPDK_2.0 {
>  DPDK_16.7 {
>  	global:
>  
> +	rte_mempool_handler_table;
> +
>  	rte_mempool_check_cookies;
>  	rte_mempool_obj_iter;
>  	rte_mempool_mem_iter;
> @@ -29,6 +31,8 @@ DPDK_16.7 {
>  	rte_mempool_populate_default;
>  	rte_mempool_populate_anon;
>  	rte_mempool_free;
> +	rte_mempool_set_handler;
> +	rte_mempool_handler_register;
>  
>  	local: *;
>  } DPDK_2.0;
> -- 
> 2.5.5
> 


* Re: [PATCH v5 1/3] mempool: support external handler
  2016-05-24 15:35           ` [PATCH v5 1/3] " Jerin Jacob
@ 2016-05-27  9:52             ` Hunt, David
  2016-05-27 10:33               ` Jerin Jacob
  0 siblings, 1 reply; 238+ messages in thread
From: Hunt, David @ 2016-05-27  9:52 UTC (permalink / raw)
  To: Jerin Jacob; +Cc: dev, olivier.matz, yuanhan.liu, pmatilai



On 5/24/2016 4:35 PM, Jerin Jacob wrote:
> On Thu, May 19, 2016 at 02:44:59PM +0100, David Hunt wrote:
>> +	/*
>> +	 * Since we have 4 combinations of the SP/SC/MP/MC examine the flags to
>> +	 * set the correct index into the handler table.
>> +	 */
>> +	if ((flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET)) == (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
>> +		rte_mempool_set_handler(mp, "ring_sp_sc");
>> +	else if (flags & MEMPOOL_F_SP_PUT)
>> +		rte_mempool_set_handler(mp, "ring_sp_mc");
>> +	else if (flags & MEMPOOL_F_SC_GET)
>> +		rte_mempool_set_handler(mp, "ring_mp_sc");
>> +	else
>> +		rte_mempool_set_handler(mp, "ring_mp_mc");
> IMO, We should decouple the implementation specific flags of _a_
> external pool manager implementation from the generic rte_mempool_create_empty
> function as going further when we introduce new flags for custom HW accelerated
> external pool manager then this common code will be bloated.

These flags are only there to maintain backward compatibility for the
default handlers. I would not envisage adding more flags to this; I
would suggest instead adding a new handler using the new API calls. So
I would not see this code growing much in the future.


>> +/** Structure storing the table of registered handlers. */
>> +struct rte_mempool_handler_table {
>> +	rte_spinlock_t sl;     /**< Spinlock for add/delete. */
>> +	uint32_t num_handlers; /**< Number of handlers in the table. */
>> +	/** Storage for all possible handlers. */
>> +	struct rte_mempool_handler handler[RTE_MEMPOOL_MAX_HANDLER_IDX];
>> +};
> add __rte_cache_aligned to this structure to avoid "handler" memory
> cacheline being shared with other variables

Will do.

>> +
>> +/** Array of registered handlers */
>> +extern struct rte_mempool_handler_table rte_mempool_handler_table;
>> +
>> +/**
>> + * @internal Get the mempool handler from its index.
>> + *
>> + * @param handler_idx
>> + *   The index of the handler in the handler table. It must be a valid
>> + *   index: (0 <= idx < num_handlers).
>> + * @return
>> + *   The pointer to the handler in the table.
>> + */
>> +static struct rte_mempool_handler *
> inline?

Will do.

>>   		/* How many do we require i.e. number to fill the cache + the request */
>> -		ret = rte_ring_mc_dequeue_bulk(mp->ring, &cache->objs[cache->len], req);
>> +		ret = rte_mempool_ext_get_bulk(mp,
> This makes inline function to a function pointer. Nothing wrong in
> that. However, Do you see any performance drop with "local cache" only
> use case?
>
> http://dpdk.org/dev/patchwork/patch/12993/

With the latest mempool manager patch (without 12993), I see no
performance degradation on my Haswell machine.
However, when I apply patch 12993, I'm seeing a 200-300 kpps drop.

With 12993, the mempool_perf_autotest shows 24% more
enqueues/dequeues, but testpmd forwarding traffic between two 40 Gig
interfaces from a hardware traffic generator, with one core doing the
forwarding, shows a drop of 200-300 kpps.

Regards,
Dave.



---snip---

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v5 1/3] mempool: support external handler
  2016-05-27  9:52             ` Hunt, David
@ 2016-05-27 10:33               ` Jerin Jacob
  2016-05-27 14:44                 ` Hunt, David
  0 siblings, 1 reply; 238+ messages in thread
From: Jerin Jacob @ 2016-05-27 10:33 UTC (permalink / raw)
  To: Hunt, David; +Cc: dev, olivier.matz, yuanhan.liu, pmatilai

On Fri, May 27, 2016 at 10:52:42AM +0100, Hunt, David wrote:
> 
> 
> On 5/24/2016 4:35 PM, Jerin Jacob wrote:
> > On Thu, May 19, 2016 at 02:44:59PM +0100, David Hunt wrote:
> > > +	/*
> > > +	 * Since we have 4 combinations of the SP/SC/MP/MC examine the flags to
> > > +	 * set the correct index into the handler table.
> > > +	 */
> > > +	if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
> > > +		rte_mempool_set_handler(mp, "ring_sp_sc");
> > > +	else if (flags & MEMPOOL_F_SP_PUT)
> > > +		rte_mempool_set_handler(mp, "ring_sp_mc");
> > > +	else if (flags & MEMPOOL_F_SC_GET)
> > > +		rte_mempool_set_handler(mp, "ring_mp_sc");
> > > +	else
> > > +		rte_mempool_set_handler(mp, "ring_mp_mc");
> > IMO, We should decouple the implementation specific flags of _a_
> > external pool manager implementation from the generic rte_mempool_create_empty
> > function as going further when we introduce new flags for custom HW accelerated
> > external pool manager then this common code will be bloated.
> 
> These flags are only there to maintain backward compatibility for the
> default handlers. I would not
> envisage adding more flags to this, I would suggest just adding a new
> handler using the new API calls.
> So I would not see this code growing much in the future.

IMHO, for _each_ HW accelerated external pool manager we may need to
introduce a specific flag to tune for specific use cases, i.e. the
MEMPOOL_F_* flags for the existing pool manager implemented in SW. For
instance, when we add a new HW external pool manager we may need to
add MEMPOOL_MYHW_DONT_FREE_ON_SEND (just a random name) to achieve
certain functionality.

So I propose letting the "unsigned flags" argument of mempool create
be an opaque type that each external pool manager can define as makes
sense for that specific pool manager, as there is NO other means to
configure the pool manager.

For instance, on HW accelerated pool manager, the flag MEMPOOL_F_SP_PUT may
not make much sense as it can work with MP without any additional
settings in HW.

So instead of adding these checks in common code, IMO, lets move this
to a pool manager specific "function pointer" function and invoke
the function pointer from generic mempool create function.

What do you think?

Jerin

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v5 1/3] mempool: support external handler
  2016-05-27 10:33               ` Jerin Jacob
@ 2016-05-27 14:44                 ` Hunt, David
  2016-05-30  9:41                   ` Jerin Jacob
  0 siblings, 1 reply; 238+ messages in thread
From: Hunt, David @ 2016-05-27 14:44 UTC (permalink / raw)
  To: Jerin Jacob; +Cc: dev, olivier.matz, yuanhan.liu, pmatilai



On 5/27/2016 11:33 AM, Jerin Jacob wrote:
> On Fri, May 27, 2016 at 10:52:42AM +0100, Hunt, David wrote:
>>
>> On 5/24/2016 4:35 PM, Jerin Jacob wrote:
>>> On Thu, May 19, 2016 at 02:44:59PM +0100, David Hunt wrote:
>>>> +	/*
>>>> +	 * Since we have 4 combinations of the SP/SC/MP/MC examine the flags to
>>>> +	 * set the correct index into the handler table.
>>>> +	 */
>>>> +	if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
>>>> +		rte_mempool_set_handler(mp, "ring_sp_sc");
>>>> +	else if (flags & MEMPOOL_F_SP_PUT)
>>>> +		rte_mempool_set_handler(mp, "ring_sp_mc");
>>>> +	else if (flags & MEMPOOL_F_SC_GET)
>>>> +		rte_mempool_set_handler(mp, "ring_mp_sc");
>>>> +	else
>>>> +		rte_mempool_set_handler(mp, "ring_mp_mc");
>>> IMO, We should decouple the implementation specific flags of _a_
>>> external pool manager implementation from the generic rte_mempool_create_empty
>>> function as going further when we introduce new flags for custom HW accelerated
>>> external pool manager then this common code will be bloated.
>> These flags are only there to maintain backward compatibility for the
>> default handlers. I would not
>> envisage adding more flags to this, I would suggest just adding a new
>> handler using the new API calls.
>> So I would not see this code growing much in the future.
> IMHO, For _each_ HW accelerated external pool manager we may need to introduce
> specific flag to tune to specific use cases.i.e MEMPOOL_F_* flags for
> this exiting pool manager implemented in SW. For instance, when we add
> a new HW external pool manager we may need to add MEMPOOL_MYHW_DONT_FREE_ON_SEND
> (just a random name) to achieve certain functionally.
>
> So I propose let "unsigned flags" in mempool create to be the opaque type and each
> external pool manager can define what it makes sense to that specific
> pool manager as there is NO other means to configure the pool manager.
>
> For instance, on HW accelerated pool manager, the flag MEMPOOL_F_SP_PUT may
> not make much sense as it can work with MP without any additional
> settings in HW.
>
> So instead of adding these checks in common code, IMO, lets move this
> to a pool manager specific "function pointer" function and invoke
> the function pointer from generic mempool create function.
>
> What do you think?
>
> Jerin

Jerin,
      That chunk of code above would indeed be better moved. I'd
suggest moving it to the rte_mempool_create function, as that's the
one that needs the backward compatibility.

On the flags issue, each mempool handler can re-interpret the flags
as needed. Maybe we could use the upper half of the bits for different
handlers, changing the meaning of those bits depending on which
handler is being set up, and keep the lower half for bits that are
common across all handlers? That way the user can just set the bits
they are interested in for that handler. Also, the alloc function has
access to the flags, so maybe the handler-specific setup could be
handled in the alloc function rather than adding a new function
pointer?

Of course, that won't help if we need to pass in more data, in which 
case we'd probably need an
opaque data pointer somewhere. It would probably be most useful to pass 
it in with the
alloc, which may need the data. Any suggestions?

Regards,
Dave.

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v5 1/3] mempool: support external handler
  2016-05-27 14:44                 ` Hunt, David
@ 2016-05-30  9:41                   ` Jerin Jacob
  2016-05-30 11:27                     ` Hunt, David
  0 siblings, 1 reply; 238+ messages in thread
From: Jerin Jacob @ 2016-05-30  9:41 UTC (permalink / raw)
  To: Hunt, David; +Cc: dev, olivier.matz, yuanhan.liu, pmatilai

On Fri, May 27, 2016 at 03:44:31PM +0100, Hunt, David wrote:
> 
> 
Hi David,
[snip]
>      That chunk of code above would be better moved all right. I'd suggest
> moving it to the
> rte_mempool_create function, as that's the one that needs the backward
> compatibility.

OK

> 
> On the flags issue, each mempool handler can re-interpret the flags as
> needed. Maybe we
> could use the upper half of the bits for different handlers, changing the
> meaning of the
> bits depending on which handler is being set up. We can then keep the lower
> half for bits that are common across all handlers? That way the user can

Common lower half bit in flags looks good.

> just set the bits they
> are interested in for that handler. Also, the alloc function has access to
> the flags, so maybe the
> handler specific setup could be handled in the alloc function rather than
> adding a new function pointer?

Yes. I agree.

> 
> Of course, that won't help if we need to pass in more data, in which case
> we'd probably need an
> opaque data pointer somewhere. It would probably be most useful to pass it
> in with the
> alloc, which may need the data. Any suggestions?

But the top level rte_mempool_create() function needs to pass the
data, right? That would be an ABI change. IMO, we need to start
thinking about passing a struct of config data to rte_mempool_create,
to preserve backward compatibility when new arguments are added to
rte_mempool_create().

Other points in HW assisted pool manager perspective,

1) Maybe RING can be replaced with some other higher-abstraction name
for the internal MEMPOOL_F_RING_CREATED flag.
2) IMO, It is better to change void *pool in struct rte_mempool to
anonymous union type, something like below, so that mempool
implementation can choose the best type.
	union {
		void *pool;
		uint64_t val;
	}

3) int32_t handler_idx creates a 4-byte hole in struct rte_mempool on
64-bit systems. IMO it is better to rearrange (as const struct
rte_memzone *mz comes next).

4) IMO, it is better to change ext_alloc/ext_free to
ext_create/ext_destroy, as there is no allocation in the HW assisted
pool manager case; it will mostly be doing some HW initialization.

> 
> Regards,
> Dave.
> 

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v5 1/3] mempool: support external handler
  2016-05-30  9:41                   ` Jerin Jacob
@ 2016-05-30 11:27                     ` Hunt, David
  2016-05-31  8:53                       ` Jerin Jacob
  0 siblings, 1 reply; 238+ messages in thread
From: Hunt, David @ 2016-05-30 11:27 UTC (permalink / raw)
  To: Jerin Jacob; +Cc: dev, olivier.matz, yuanhan.liu, pmatilai



On 5/30/2016 10:41 AM, Jerin Jacob wrote:
--snip--
>> Of course, that won't help if we need to pass in more data, in which case
>> we'd probably need an
>> opaque data pointer somewhere. It would probably be most useful to pass it
>> in with the
>> alloc, which may need the data. Any suggestions?
> But the top level rte_mempool_create() function needs to pass the data. Right?
> That would be an ABI change. IMO, we need to start thinking about
> passing a struct of config data to rte_mempool_create to create
> backward compatibility on new argument addition to rte_mempool_create()

New mempool handlers will use rte_mempool_create_empty(),
rte_mempool_set_handler(), then rte_mempool_populate_*(). These three
functions are new to this release, so there's no problem adding a
parameter to one of them for the config data. Also, since we're adding
some new elements to the mempool structure, how about we add a new
void pointer to a config data structure, as defined by the handler?

So, new element in rte_mempool struct alongside the *pool
     void *pool;
     void *pool_config;

Then add a param to the rte_mempool_set_handler function:
int
rte_mempool_set_handler(struct rte_mempool *mp, const char *name, void 
*pool_config)

The function would simply set the pointer in the mempool struct, and
the custom handler alloc/create function would use it as appropriate.
Handlers that do not need this data can be passed NULL.


> Other points in HW assisted pool manager perspective,
>
> 1) May be RING can be replaced with some other higher abstraction name
> for the internal MEMPOOL_F_RING_CREATED flag

Agreed. I'll change to MEMPOOL_F_POOL_CREATED, since we're already 
changing the *ring
element of the mempool struct to *pool

> 2) IMO, It is better to change void *pool in struct rte_mempool to
> anonymous union type, something like below, so that mempool
> implementation can choose the best type.
> 	union {
> 		void *pool;
> 		uint64_t val;
> 	}

Could we do this by using the union for the *pool_config suggested
above? Would that give you what you need?

> 3) int32_t handler_idx creates 4 byte hole in struct rte_mempool in
> 64 bit system. IMO it better to rearrange.(as const struct rte_memzone
> *mz comes next)
OK, Will look at this.

> 4) IMO, It is better to change ext_alloc/ext_free to ext_create/ext_destroy
> as their is no allocation in HW assisted pool manager case,
> it will be mostly creating some HW initialization.

OK, I'll change. I think that makes more sense.


Regards,
Dave.

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v5 1/3] mempool: support external handler
  2016-05-30 11:27                     ` Hunt, David
@ 2016-05-31  8:53                       ` Jerin Jacob
  2016-05-31 15:37                         ` Hunt, David
  0 siblings, 1 reply; 238+ messages in thread
From: Jerin Jacob @ 2016-05-31  8:53 UTC (permalink / raw)
  To: Hunt, David; +Cc: dev, olivier.matz, yuanhan.liu, pmatilai

On Mon, May 30, 2016 at 12:27:26PM +0100, Hunt, David wrote:
> 
> New mempool handlers will use rte_mempool_create_empty(),
> rte_mempool_set_handler(),
> then rte_mempool_populate_*(). These three functions are new to this
> release, to no problem

Having separate APIs for external pool-manager creation is worrisome
from an application perspective. Is it possible to have
rte_mempool_[xmem]_create for both the external and existing SW pool
managers, and make rte_mempool_create_empty and rte_mempool_populate_*
internal functions?

IMO, we can do that by selecting the specific rte_mempool_set_handler()
based on _flags_ encoding, something like below:

bit 0 - 15   // generic bits used across all the pool managers
bit 16 - 23  // pool handler specific flag bits
bit 24 - 31  // to select the specific pool manager (up to 256 different
flavors of pool managers; for backward compatibility, make '0' (in bits
24-31) select the existing SW pool manager)

and applications can choose the handlers by setting the flag in
rte_mempool_[xmem]_create. That way it will be easy in testpmd or any
other application to choose the pool handler from the command line etc.
in future.

and we can remove "mbuf: get default mempool handler from configuration"
change-set OR just add if RTE_MBUF_DEFAULT_MEMPOOL_HANDLER is defined then set
the same with rte_mempool_set_handler in rte_mempool_[xmem]_create.

What do you think?

> to add a parameter to one of them for the config data. Also since we're
> adding some new
> elements to the mempool structure, how about we add a new pointer for a void
> pointer to a
> config data structure, as defined by the handler.
> 
> So, new element in rte_mempool struct alongside the *pool
>     void *pool;
>     void *pool_config;
> 
> Then add a param to the rte_mempool_set_handler function:
> int
> rte_mempool_set_handler(struct rte_mempool *mp, const char *name, void
> *pool_config)

IMO, maybe we need to have both _set_ and _get_, so I think we can
have two separate callbacks in the external pool manager for that if
required. For now, we can live with the pool manager specific 8 bits
(bits 16-23) for the configuration as mentioned above, and add the new
callbacks for set and get when required.

> > 2) IMO, It is better to change void *pool in struct rte_mempool to
> > anonymous union type, something like below, so that mempool
> > implementation can choose the best type.
> > 	union {
> > 		void *pool;
> > 		uint64_t val;
> > 	}
> 
> Could we do this by using the union for the *pool_config suggested above,
> would that give
> you what you need?

It would be extra overhead for an external pool manager to _alloc_
memory, store the allocated pointer in the mempool struct (as *pool),
and use pool for pointing to other data structures, as some
implementations need only a few bytes to store the external pool
manager specific context.

To fix this problem, we may classify fast path and slow path elements
in struct rte_mempool, move all fast path elements into the first
cache line, and create an empty opaque space in the remaining bytes of
that cache line, so that an external pool manager which needs only
limited space is not required to allocate separate memory to save the
per-core cache in the fast path.

something like below,
union {
	void *pool;
	uint64_t val;
	uint8_t extra_mem[16] // available free bytes in fast path cache line

}

Other points,

1) Is it possible to remove the unused is_mp argument in the
__mempool_put_bulk function, as it is just an internal function?

2) Considering "get" and "put" are the fast-path callbacks for the
pool manager, is it possible to avoid the extra overhead of the
following _load_ and the additional cache line on each call:
rte_mempool_handler_table.handler[handler_idx]

I understand it is there for multiprocess support, but I am thinking:
can we introduce something like the ethernet API's multiprocess
support, and resolve the "put" and "get" function pointers at init and
store them in struct rte_mempool? Something like

file: drivers/net/ixgbe/ixgbe_ethdev.c
search for if (rte_eal_process_type() != RTE_PROC_PRIMARY) {

Jerin

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [dpdk-dev,v5,1/3] mempool: support external handler
  2016-05-23 12:35           ` [dpdk-dev,v5,1/3] " Jan Viktorin
  2016-05-24 14:04             ` Hunt, David
@ 2016-05-31  9:09             ` Hunt, David
  2016-05-31 12:06               ` Jan Viktorin
  1 sibling, 1 reply; 238+ messages in thread
From: Hunt, David @ 2016-05-31  9:09 UTC (permalink / raw)
  To: Jan Viktorin; +Cc: dev, olivier.matz, yuanhan.liu, pmatilai

Hi Jan,

On 5/23/2016 1:35 PM, Jan Viktorin wrote:
>> Until now, the objects stored in mempool mempool were internally stored a
> s/mempool mempool/mempool/
>
> stored _in_ a ring?

Fixed.

>
> @@ -161,7 +161,6 @@ per_lcore_mempool_test(__attribute__((unused)) void *arg)
>   							   n_get_bulk);
>   				if (unlikely(ret < 0)) {
>   					rte_mempool_dump(stdout, mp);
> -					rte_ring_dump(stdout, mp->ring);
>   					/* in this case, objects are lost... */
>   					return -1;
>   				}
> I think, this should be in a separate patch explaining the reason to remove it.

Done. Moved.

>> diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
>> index 43423e0..f19366e 100644
>> --- a/lib/librte_mempool/Makefile
>> +++ b/lib/librte_mempool/Makefile
>> @@ -42,6 +42,8 @@ LIBABIVER := 2
>>   
>>   # all source are stored in SRCS-y
>>   SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool.c
>> +SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_handler.c
>> +SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_default.c
>>   # install includes
>>   SYMLINK-$(CONFIG_RTE_LIBRTE_MEMPOOL)-include := rte_mempool.h
>>   
>> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
>> index 1ab6701..6ec2b3f 100644
>> --- a/lib/librte_mempool/rte_mempool.c
>> +++ b/lib/librte_mempool/rte_mempool.c
>> @@ -148,7 +148,7 @@ mempool_add_elem(struct rte_mempool *mp, void *obj, phys_addr_t physaddr)
>>   #endif
>>   
>>   	/* enqueue in ring */
>> -	rte_ring_sp_enqueue(mp->ring, obj);
>> +	rte_mempool_ext_put_bulk(mp, &obj, 1);
> I suppose this is OK, however, replacing "enqueue" by "put" (semantically) sounds to me
> like a bug. Enqueue is put into a queue. Put is to drop a reference.

Yes, Makes sense. Changed  'put' and 'get' to 'enqueue' and 'dequeue'

>
>>   
>> -/* create the internal ring */
>> -static int
>> -rte_mempool_ring_create(struct rte_mempool *mp)
>> -{
>> -	int rg_flags = 0, ret;
>> -	char rg_name[RTE_RING_NAMESIZE];
>> -	struct rte_ring *r;
>> -
>> -	ret = snprintf(rg_name, sizeof(rg_name),
>> -		RTE_MEMPOOL_MZ_FORMAT, mp->name);
>> -	if (ret < 0 || ret >= (int)sizeof(rg_name))
>> -		return -ENAMETOOLONG;
>> -
>> -	/* ring flags */
>> -	if (mp->flags & MEMPOOL_F_SP_PUT)
>> -		rg_flags |= RING_F_SP_ENQ;
>> -	if (mp->flags & MEMPOOL_F_SC_GET)
>> -		rg_flags |= RING_F_SC_DEQ;
>> -
>> -	/* Allocate the ring that will be used to store objects.
>> -	 * Ring functions will return appropriate errors if we are
>> -	 * running as a secondary process etc., so no checks made
>> -	 * in this function for that condition.
>> -	 */
>> -	r = rte_ring_create(rg_name, rte_align32pow2(mp->size + 1),
>> -		mp->socket_id, rg_flags);
>> -	if (r == NULL)
>> -		return -rte_errno;
>> -
>> -	mp->ring = r;
>> -	mp->flags |= MEMPOOL_F_RING_CREATED;
>> -	return 0;
>> -}
> This is a big change. I suggest (if possible) to make a separate patch with
> something like "replace rte_mempool_ring_create by ...". Where is this code
> placed now?

The code is not gone away, it's now part of the default handler, which 
uses a ring. It's
in rte_mempool_default.c

>> -
>>   /* free a memchunk allocated with rte_memzone_reserve() */
>>   static void
>>   rte_mempool_memchunk_mz_free(__rte_unused struct rte_mempool_memhdr *memhdr,
>> @@ -351,7 +317,7 @@ rte_mempool_free_memchunks(struct rte_mempool *mp)
>>   	void *elt;
>>   
>>   	while (!STAILQ_EMPTY(&mp->elt_list)) {
>> -		rte_ring_sc_dequeue(mp->ring, &elt);
>> +		rte_mempool_ext_get_bulk(mp, &elt, 1);
> Similar as for put_bulk... Replacing "dequeue" by "get" (semantically) sounds to me
> like a bug. Dequeue is drop from a queue. Get is to obtain a reference.

Done

>>   		(void)elt;
>>   		STAILQ_REMOVE_HEAD(&mp->elt_list, next);
>>   		mp->populated_size--;
>> @@ -380,15 +346,18 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
>>   	unsigned i = 0;
>>   	size_t off;
>>   	struct rte_mempool_memhdr *memhdr;
>> -	int ret;
>>   
>>   	/* create the internal ring if not already done */
>>   	if ((mp->flags & MEMPOOL_F_RING_CREATED) == 0) {
>> -		ret = rte_mempool_ring_create(mp);
>> -		if (ret < 0)
>> -			return ret;
>> +		rte_errno = 0;
>> +		mp->pool = rte_mempool_ext_alloc(mp);
>> +		if (mp->pool == NULL) {
>> +			if (rte_errno == 0)
>> +				return -EINVAL;
>> +			else
>> +				return -rte_errno;
>> +		}
>>   	}
>> -
> Is this a whitespace change?

Accidental. Reverted


>> +
>> +	/*
>> +	 * Since we have 4 combinations of the SP/SC/MP/MC examine the flags to
>> +	 * set the correct index into the handler table.
>> +	 */
>> +	if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
>> +		rte_mempool_set_handler(mp, "ring_sp_sc");
>> +	else if (flags & MEMPOOL_F_SP_PUT)
>> +		rte_mempool_set_handler(mp, "ring_sp_mc");
>> +	else if (flags & MEMPOOL_F_SC_GET)
>> +		rte_mempool_set_handler(mp, "ring_mp_sc");
>> +	else
>> +		rte_mempool_set_handler(mp, "ring_mp_mc");
>> +
> Do I understand it well that this code preserves behaviour of the previous API?
> Because otherwise it looks strange.

Yes, it's just there to keep backward compatibility. It will also move
somewhere more sensible in the latest patch: to rte_mempool_create
rather than rte_mempool_create_empty.

>>   struct rte_mempool {
>>   	char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool. */
>> -	struct rte_ring *ring;           /**< Ring to store objects. */
>> +	void *pool;                      /**< Ring or ext-pool to store objects. */
>> +	/**
>> +	 * Index into the array of structs containing callback fn pointers.
>> +	 * We're using an index here rather than pointers to the callbacks
>> +	 * to facilitate any secondary processes that may want to use
>> +	 * this mempool. Any function pointers stored in the mempool
>> +	 * directly would not be valid for secondary processes.
>> +	 */
> I think, this comment should go to the rte_mempool_handler_table definition
> leaving a here a short note about it.

I've added a comment to rte_mempool_handler_table, and tweaked this 
comment somewhat.

>> +	int32_t handler_idx;
>>   	const struct rte_memzone *mz;    /**< Memzone where pool is allocated */
>>   	int flags;                       /**< Flags of the mempool. */
>>   	int socket_id;                   /**< Socket id passed at mempool creation. */
>> @@ -325,6 +334,175 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp,
>>   #define __mempool_check_cookies(mp, obj_table_const, n, free) do {} while(0)
>>   #endif /* RTE_LIBRTE_MEMPOOL_DEBUG */
>>   
>> +#define RTE_MEMPOOL_HANDLER_NAMESIZE 32 /**< Max length of handler name. */
>> +
>> +/** Allocate the external pool. */

Note: for the next few comments you switched to commenting above the
code; I've moved the comments to below the code, and replied.

>> +typedef void *(*rte_mempool_alloc_t)(struct rte_mempool *mp);
>> +
>> +/** Free the external pool. */
> What is the purpose of this callback?
> What exactly does it allocate?
> Some rte_mempool internals?
> Or the memory?
> What does it return?

This is the main allocate function of the handler, and it is under the
mempool handler's control. The handler's alloc function does whatever
it needs to do to grab memory for this handler, and places a pointer
to it in the *pool opaque pointer in the rte_mempool struct. In the
default handler, *pool points to a ring; in other handlers, it will
most likely point to a different type of data structure. It will be
transparent to the application programmer.

>> +typedef void (*rte_mempool_free_t)(void *p);
>> +
>> +/** Put an object in the external pool. */
> Why this *_free callback does not accept the rte_mempool param?
>

We're freeing the pool opaque data here.


>> +typedef int (*rte_mempool_put_t)(void *p, void * const *obj_table, unsigned n);
> What is the *p pointer?
> What is the obj_table?
> Why is it void *?
> Why is it const?
>

The *p pointer is the opaque data for a given mempool handler (ring, 
array, linked list, etc)

> Probably, "unsigned int n" is better.
>
>> +
>> +/** Get an object from the external pool. */
>> +typedef int (*rte_mempool_get_t)(void *p, void **obj_table, unsigned n);
> Probably, "unsigned int n" is better.

Done.

>> +
>> +/** Return the number of available objects in the external pool. */
> What is the purpose of the *_get_count callback? I guess it can introduce
> race conditions...

I think it depends on the implementation of the handler's get_count
function; it must ensure not to return values greater than the size of
the mempool.

>> +typedef unsigned (*rte_mempool_get_count)(void *p);
> unsigned int

Sure.

>> +
>> +/** Structure defining a mempool handler. */
> Later in the text, I suggested to rename rte_mempool_handler to rte_mempool_ops.
> I believe that it explains the purpose of this struct better. It would improve
> consistency in function names (the *_ext_* mark is very strange and inconsistent).

I agree. I've gone through all the code and renamed it to
rte_mempool_handler_ops.

>> +struct rte_mempool_handler {
>> +	char name[RTE_MEMPOOL_HANDLER_NAMESIZE]; /**< Name of mempool handler */
>> +	rte_mempool_alloc_t alloc;       /**< Allocate the external pool. */
>> +	rte_mempool_free_t free;         /**< Free the external pool. */
>> +	rte_mempool_put_t put;           /**< Put an object. */
>> +	rte_mempool_get_t get;           /**< Get an object. */
>> +	rte_mempool_get_count get_count; /**< Get the number of available objs. */
>> +} __rte_cache_aligned;
>> +
>> +#define RTE_MEMPOOL_MAX_HANDLER_IDX 16  /**< Max number of registered handlers */
>> +
>> +/** Structure storing the table of registered handlers. */
>> +struct rte_mempool_handler_table {
>> +	rte_spinlock_t sl;     /**< Spinlock for add/delete. */
>> +	uint32_t num_handlers; /**< Number of handlers in the table. */
>> +	/** Storage for all possible handlers. */
>> +	struct rte_mempool_handler handler[RTE_MEMPOOL_MAX_HANDLER_IDX];
>> +};
> The handlers are implemented as an array due to multi-process access.
> Is it correct? I'd expect a note about it here.

Yes, you are correct. I've improved the comments.

>> +
>> +/** Array of registered handlers */
>> +extern struct rte_mempool_handler_table rte_mempool_handler_table;
>> +
>> +/**
>> + * @internal Get the mempool handler from its index.
>> + *
>> + * @param handler_idx
>> + *   The index of the handler in the handler table. It must be a valid
>> + *   index: (0 <= idx < num_handlers).
>> + * @return
>> + *   The pointer to the handler in the table.
>> + */
>> +static struct rte_mempool_handler *
>> +rte_mempool_handler_get(int handler_idx)
>> +{
>> +	return &rte_mempool_handler_table.handler[handler_idx];
> Is it always safe? Can we belive the handler_idx is inside the boundaries?
> At least some RTE_VERIFY would be nice here...

Agreed. Added:
RTE_VERIFY(handler_idx < RTE_MEMPOOL_MAX_HANDLER_IDX);


>> +}
>> +
>> +/**
>> + * @internal wrapper for external mempool manager alloc callback.
>> + *
>> + * @param mp
>> + *   Pointer to the memory pool.
>> + * @return
>> + *   The opaque pointer to the external pool.
>> + */
>> +void *
>> +rte_mempool_ext_alloc(struct rte_mempool *mp);
>> +
>> +/**
>> + * @internal wrapper for external mempool manager get callback.
>> + *
>> + * @param mp
>> + *   Pointer to the memory pool.
>> + * @param obj_table
>> + *   Pointer to a table of void * pointers (objects).
>> + * @param n
>> + *   Number of objects to get.
>> + * @return
>> + *   - 0: Success; got n objects.
>> + *   - <0: Error; code of handler get function.
> Should this doc be more specific about the possible failures?

This is up to the handler. We cannot know what codes will be returned at 
this stage.

>> + */
>> +static inline int
>> +rte_mempool_ext_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
>> +{
>> +	struct rte_mempool_handler *handler;
>> +
>> +	handler = rte_mempool_handler_get(mp->handler_idx);
>> +	return handler->get(mp->pool, obj_table, n);
>> +}
>> +
>> +/**
>> + * @internal wrapper for external mempool manager put callback.
>> + *
>> + * @param mp
>> + *   Pointer to the memory pool.
>> + * @param obj_table
>> + *   Pointer to a table of void * pointers (objects).
>> + * @param n
>> + *   Number of objects to put.
>> + * @return
>> + *   - 0: Success; n objects supplied.
>> + *   - <0: Error; code of handler put function.
> Should this doc be more specific about the possible failures?

This is up to the handler. We cannot know what codes will be returned at 
this stage.

>> + */
>> +static inline int
>> +rte_mempool_ext_put_bulk(struct rte_mempool *mp, void * const *obj_table,
>> +		unsigned n)
>> +{
>> +	struct rte_mempool_handler *handler;
>> +
>> +	handler = rte_mempool_handler_get(mp->handler_idx);
>> +	return handler->put(mp->pool, obj_table, n);
>> +}
>> +
>> +/**
>> + * @internal wrapper for external mempool manager get_count callback.
>> + *
>> + * @param mp
>> + *   Pointer to the memory pool.
>> + * @return
>> + *   The number of available objects in the external pool.
>> + */
>> +unsigned
> unsigned int

Done.

>> +rte_mempool_ext_get_count(const struct rte_mempool *mp);
>> +
>> +/**
>> + * @internal wrapper for external mempool manager free callback.
>> + *
>> + * @param mp
>> + *   Pointer to the memory pool.
>> + */
>> +void
>> +rte_mempool_ext_free(struct rte_mempool *mp);
>> +
>> +/**
>> + * Set the handler of a mempool
>> + *
>> + * This can only be done on a mempool that is not populated, i.e. just after
>> + * a call to rte_mempool_create_empty().
>> + *
>> + * @param mp
>> + *   Pointer to the memory pool.
>> + * @param name
>> + *   Name of the handler.
>> + * @return
>> + *   - 0: Sucess; the new handler is configured.
>> + *   - <0: Error (errno)
> Should this doc be more specific about the possible failures?
>
> The body of rte_mempool_set_handler does not set errno at all.
> It returns e.g. -EEXIST.

This is up to the handler. We cannot know what codes will be returned at 
this stage.
errno was a cut-and-paste error, this is now fixed.


>> + */
>> +int
>> +rte_mempool_set_handler(struct rte_mempool *mp, const char *name);
>> +
>> +/**
>> + * Register an external pool handler.
>> + *
>> + * @param h
>> + *   Pointer to the external pool handler
>> + * @return
>> + *   - >=0: Sucess; return the index of the handler in the table.
>> + *   - <0: Error (errno)
> Should this doc be more specific about the possible failures?

This is up to the handler. We cannot know what codes will be returned at 
this stage.
errno was a cut-and-paste error, this is now fixed.

>> + */
>> +int rte_mempool_handler_register(struct rte_mempool_handler *h);
>> +
>> +/**
>> + * Macro to statically register an external pool handler.
>> + */
>> +#define MEMPOOL_REGISTER_HANDLER(h)					\
>> +	void mp_hdlr_init_##h(void);					\
>> +	void __attribute__((constructor, used)) mp_hdlr_init_##h(void)	\
>> +	{								\
>> +		rte_mempool_handler_register(&h);			\
>> +	}
>> +
> There might be a little catch. If there is no more room for handlers, calling the
> rte_mempool_handler_register would fail silently as the error reporting does not
> work when calling a constructor (or at least, this is my experience).
>
> Not a big deal but...

I would hope that the developer would check this when adding new
handlers. If there is no room for new handlers, then their new one will
never work...
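A self-contained sketch of the failure mode being discussed (tiny table and names made up for illustration): registration runs in a constructor before main(), so there is no caller to return an error to, and a full table can only be logged.

```c
#include <assert.h>
#include <stdio.h>

#define MAX_HANDLERS 2	/* deliberately tiny to show the failure mode */

static const char *handler_names[MAX_HANDLERS];
static int num_handlers;

static int
handler_register(const char *name)
{
	if (num_handlers >= MAX_HANDLERS)
		return -1;	/* table full */
	handler_names[num_handlers] = name;
	return num_handlers++;
}

/* Runs before main(), like MEMPOOL_REGISTER_HANDLER's constructor:
 * there is nobody to return an error to, so a full table can only
 * be logged, not reported to the caller. */
static void __attribute__((constructor))
register_my_handler(void)
{
	if (handler_register("my_handler") < 0)
		fprintf(stderr, "mempool handler table full\n");
}
```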

>>   
>>   ring_enqueue:
>> -
>>   	/* push remaining objects in ring */
>> -#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
>> -	if (is_mp) {
>> -		if (rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n) < 0)
>> -			rte_panic("cannot put objects in mempool\n");
>> -	}
>> -	else {
>> -		if (rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n) < 0)
>> -			rte_panic("cannot put objects in mempool\n");
>> -	}
>> -#else
>> -	if (is_mp)
>> -		rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n);
>> -	else
>> -		rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n);
>> -#endif
>> +	rte_mempool_ext_put_bulk(mp, obj_table, n);
> This is a big change. Does it remove the RTE_LIBRTE_MEMPOOL_DEBUG config option
> entirely? If so, I suggest to first do this in a separated patch and then
> replace the original *_enqueue_bulk by your *_ext_put_bulk (or better *_ops_put_bulk
> as I explain below).

Well spotted. I have reverted this change. The mp_enqueue and sp_enqueue
have been replaced with the handler versions, and I've added back in the
DEBUG check with the rte_panic.

>>   
>> +
>> +static struct rte_mempool_handler handler_mp_mc = {
>> +	.name = "ring_mp_mc",
>> +	.alloc = common_ring_alloc,
>> +	.free = common_ring_free,
>> +	.put = common_ring_mp_put,
>> +	.get = common_ring_mc_get,
>> +	.get_count = common_ring_get_count,
>> +};
>> +
>> +static struct rte_mempool_handler handler_sp_sc = {
>> +	.name = "ring_sp_sc",
>> +	.alloc = common_ring_alloc,
>> +	.free = common_ring_free,
>> +	.put = common_ring_sp_put,
>> +	.get = common_ring_sc_get,
>> +	.get_count = common_ring_get_count,
>> +};
>> +
>> +static struct rte_mempool_handler handler_mp_sc = {
>> +	.name = "ring_mp_sc",
>> +	.alloc = common_ring_alloc,
>> +	.free = common_ring_free,
>> +	.put = common_ring_mp_put,
>> +	.get = common_ring_sc_get,
>> +	.get_count = common_ring_get_count,
>> +};
>> +
>> +static struct rte_mempool_handler handler_sp_mc = {
>> +	.name = "ring_sp_mc",
>> +	.alloc = common_ring_alloc,
>> +	.free = common_ring_free,
>> +	.put = common_ring_sp_put,
>> +	.get = common_ring_mc_get,
>> +	.get_count = common_ring_get_count,
>> +};
>> +
> Introducing those handlers can go as a separate patch. IMHO, that would simplify
> the review process a lot. First introduce the mechanism, then add something
> inside.
>
> I'd also note that those handlers are always available and what kind of memory
> do they use...

Done. Now added as a separate patch.

They don't use any memory unless they are used.


>> +#include <stdio.h>
>> +#include <string.h>
>> +
>> +#include <rte_mempool.h>
>> +
>> +/* indirect jump table to support external memory pools */
>> +struct rte_mempool_handler_table rte_mempool_handler_table = {
>> +	.sl =  RTE_SPINLOCK_INITIALIZER ,
>> +	.num_handlers = 0
>> +};
>> +
>> +/* add a new handler in rte_mempool_handler_table, return its index */
> It seems to me that there is no way how to put some opaque pointer into the
> handler. In such case I would expect I can do something like:
>
> struct my_handler {
> 	struct rte_mempool_handler h;
> 	...
> } handler;
>
> rte_mempool_handler_register(&handler.h);
>
> But I cannot because you copy the contents of the handler. By the way, this
> should be documented.
>
> How can I pass an opaque pointer here? The only way I see is through the
> rte_mempool.pool.

I think I have addressed this in a later patch, in the discussions with
Jerin on the list. But rather than passing data at register time, I pass
it at create time (or rather set_handler). And a new config void pointer
has been added to the mempool struct for this purpose.
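A rough standalone sketch of that shape (field and function names are illustrative, not the final DPDK API): the opaque config travels with the mempool from set_handler time onward, separate from the handler-private *pool state.

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for the real structures. The handler-private
 * state (*pool) is filled later by the handler's alloc callback;
 * pool_config carries the user's opaque data, supplied at
 * set_handler() time rather than at registration time. */
struct mempool {
	int handler_idx;
	void *pool;		/* handler-private state, e.g. a ring */
	void *pool_config;	/* user-supplied opaque configuration */
};

static int
mempool_set_handler(struct mempool *mp, int handler_idx, void *config)
{
	mp->handler_idx = handler_idx;
	mp->pool_config = config;
	return 0;
}
```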

>   In that case, what about renaming the rte_mempool_handler
> to rte_mempool_ops? Because semantically, it is not a handler, it just holds
> the operations.
>
> This would improve some namings:
>
> rte_mempool_ext_alloc -> rte_mempool_ops_alloc
> rte_mempool_ext_free -> rte_mempool_ops_free
> rte_mempool_ext_get_count -> rte_mempool_ops_get_count
> rte_mempool_handler_register -> rte_mempool_ops_register
>
> seems to be more readable to me. The *_ext_* mark does not say anything valuable.
> It just scares a bit :).

Agreed. Makes sense. The ext was intended to be 'external', but that's a
bit too generic, and not very intuitive. The 'ops' tag seems better to
me. I've changed this in the latest patch.

>> +/* wrapper to get available objects in an external pool handler */
>> +unsigned
>> +rte_mempool_ext_get_count(const struct rte_mempool *mp)
>> +{
>> +	struct rte_mempool_handler *handler;
>> +
>> +	handler = rte_mempool_handler_get(mp->handler_idx);
>> +	return handler->get_count(mp->pool);
>> +}
>> +
>> +/* set the handler of a mempool */
> The doc comment should say "this sets a handler previously registered by
> the rte_mempool_handler_register function ...". I was confused and didn't
> understand how the handlers are inserted into the table.

Done.

--snip--

> Regards
> Jan

Thanks, Jan. Very comprehensive.  I'll hopefully be pushing the latest 
patch to the list later today (Tuesday 31st)

Regards,
Dave.

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [dpdk-dev, v5, 2/3] app/test: test external mempool handler
  2016-05-23 12:45           ` [dpdk-dev, v5, " Jan Viktorin
@ 2016-05-31  9:17             ` Hunt, David
  2016-05-31 12:14               ` Jan Viktorin
  0 siblings, 1 reply; 238+ messages in thread
From: Hunt, David @ 2016-05-31  9:17 UTC (permalink / raw)
  To: Jan Viktorin; +Cc: dev, olivier.matz, yuanhan.liu, pmatilai

Hi Jan,

On 5/23/2016 1:45 PM, Jan Viktorin wrote:
> On Thu, 19 May 2016 14:45:00 +0100
> David Hunt <david.hunt@intel.com> wrote:

--snip--

>> + * Loop though all the element pointers and allocate a chunk of memory, then
> s/though/through/

Fixed.

>> +static struct rte_mempool_handler mempool_handler_custom = {
>> +	.name = "custom_handler",
>> +	.alloc = custom_mempool_alloc,
>> +	.free = custom_mempool_free,
>> +	.put = custom_mempool_put,
>> +	.get = custom_mempool_get,
>> +	.get_count = custom_mempool_get_count,
>> +};
>> +
>> +MEMPOOL_REGISTER_HANDLER(mempool_handler_custom);
> What about to drop the rte_mempool_handler.name field and derive the
> name from the variable name given to the MEMPOOL_REGISTER_HANDLER.
> The MEMPOOL_REGISTER_HANDLER sould do some macro magic inside and call
>
>    rte_mempool_handler_register(name, handler);
>
> Just an idea...

Lets see if anyone else has any strong opinions on this :)


>> +
>> +/*
>>    * save the object number in the first 4 bytes of object data. All
>>    * other bytes are set to 0.
>>    */
>> @@ -479,6 +569,7 @@ test_mempool(void)
>>   {
>>   	struct rte_mempool *mp_cache = NULL;
>>   	struct rte_mempool *mp_nocache = NULL;
>> +	struct rte_mempool *mp_ext = NULL;
>>   
>>   	rte_atomic32_init(&synchro);
>>   
>> @@ -507,6 +598,27 @@ test_mempool(void)
>>   		goto err;
>>   	}
>>   
>> +	/* create a mempool with an external handler */
>> +	mp_ext = rte_mempool_create_empty("test_ext",
>> +		MEMPOOL_SIZE,
>> +		MEMPOOL_ELT_SIZE,
>> +		RTE_MEMPOOL_CACHE_MAX_SIZE, 0,
>> +		SOCKET_ID_ANY, 0);
>> +
>> +	if (mp_ext == NULL) {
>> +		printf("cannot allocate mp_ext mempool\n");
>> +		goto err;
>> +	}
>> +	if (rte_mempool_set_handler(mp_ext, "custom_handler") < 0) {
>> +		printf("cannot set custom handler\n");
>> +		goto err;
>> +	}
>> +	if (rte_mempool_populate_default(mp_ext) < 0) {
>> +		printf("cannot populate mp_ext mempool\n");
>> +		goto err;
>> +	}
>> +	rte_mempool_obj_iter(mp_ext, my_obj_init, NULL);
>> +
> The test becomes quite complex. What about having several smaller
> tests with a clear setup and cleanup steps?

I guess that's something we can look at in the future. For the moment 
can we leave it?

Thanks,
Dave.


* Re: [dpdk-dev, v5, 3/3] mbuf: get default mempool handler from configuration
  2016-05-23 12:40           ` [dpdk-dev, v5, " Jan Viktorin
@ 2016-05-31  9:26             ` Hunt, David
  0 siblings, 0 replies; 238+ messages in thread
From: Hunt, David @ 2016-05-31  9:26 UTC (permalink / raw)
  To: Jan Viktorin; +Cc: dev, olivier.matz, yuanhan.liu, pmatilai



On 5/23/2016 1:40 PM, Jan Viktorin wrote:
> On Thu, 19 May 2016 14:45:01 +0100
> David Hunt <david.hunt@intel.com> wrote:

--snip--

>> +	mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
>> +		 sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
>> +	if (mp == NULL)
>> +		return NULL;
>> +
>> +	rte_mempool_set_handler(mp, RTE_MBUF_DEFAULT_MEMPOOL_HANDLER);
> Check for a failure is missing here. Especially -EEXIST.

Done.

--snip--


Thanks,
Dave.


* Re: [dpdk-dev,v5,1/3] mempool: support external handler
  2016-05-31  9:09             ` Hunt, David
@ 2016-05-31 12:06               ` Jan Viktorin
  2016-05-31 13:47                 ` Hunt, David
  0 siblings, 1 reply; 238+ messages in thread
From: Jan Viktorin @ 2016-05-31 12:06 UTC (permalink / raw)
  To: Hunt, David; +Cc: dev, olivier.matz, yuanhan.liu, pmatilai

On Tue, 31 May 2016 10:09:42 +0100
"Hunt, David" <david.hunt@intel.com> wrote:

> Hi Jan,
> 

[...]

> 
> >> +typedef void *(*rte_mempool_alloc_t)(struct rte_mempool *mp);
> >> +
> >> +/** Free the external pool. */  
> > What is the purpose of this callback?
> > What exactly does it allocate?
> > Some rte_mempool internals?
> > Or the memory?
> > What does it return?  
> 
> This is the main allocate function of the handler. It is up to the 
> mempool handlers control.
> The handler's alloc function does whatever it needs to do to grab memory 
> for this handler, and places
> a pointer to it in the *pool opaque pointer in the rte_mempool struct. 
> In the default handler, *pool
> points to a ring, in other handlers, it will mostlikely point to a 
> different type of data structure. It will
> be transparent to the application programmer.

Thanks for explanation. Please, add doc comments there.

Should the *mp be const here? It is not clear to me whether the callback is
expected to modify the mempool or not (eg. set the pool pointer).

> 
> >> +typedef void (*rte_mempool_free_t)(void *p);
> >> +
> >> +/** Put an object in the external pool. */  
> > Why this *_free callback does not accept the rte_mempool param?
> >  
> 
> We're freeing the pool opaque data here.

Add doc comments...

> 
> 
> >> +typedef int (*rte_mempool_put_t)(void *p, void * const *obj_table, unsigned n);  
> > What is the *p pointer?
> > What is the obj_table?
> > Why is it void *?
> > Why is it const?
> >  
> 
> The *p pointer is the opaque data for a given mempool handler (ring, 
> array, linked list, etc)

Again, doc comments...

I don't like the obj_table representation to be an array of void *. I could see
it already in DPDK for defining Ethernet driver queues, so, it's probably not
an issue. I just say, I would prefer some basic type safety like

struct rte_mempool_obj {
	void *p;
};

Is there somebody with different opinions?

[...]

> >> +
> >> +/** Structure defining a mempool handler. */  
> > Later in the text, I suggested to rename rte_mempool_handler to rte_mempool_ops.
> > I believe that it explains the purpose of this struct better. It would improve
> > consistency in function names (the *_ext_* mark is very strange and inconsistent).  
> 
> I agree. I've gone through all the code and renamed to 
> rte_mempool_handler_ops.

Ok. I meant rte_mempool_ops because I find the word "handler" to be redundant.

> 

[...]

> >> +/**
> >> + * Set the handler of a mempool
> >> + *
> >> + * This can only be done on a mempool that is not populated, i.e. just after
> >> + * a call to rte_mempool_create_empty().
> >> + *
> >> + * @param mp
> >> + *   Pointer to the memory pool.
> >> + * @param name
> >> + *   Name of the handler.
> >> + * @return
> >> + *   - 0: Sucess; the new handler is configured.
> >> + *   - <0: Error (errno)  
> > Should this doc be more specific about the possible failures?
> >
> > The body of rte_mempool_set_handler does not set errno at all.
> > It returns e.g. -EEXIST.  
> 
> This is up to the handler. We cannot know what codes will be returned at 
> this stage.
> errno was a cut-and paste error, this is now fixed.

I don't think so. The rte_mempool_set_handler is not handler-specific:

116 /* set the handler of a mempool */
117 int
118 rte_mempool_set_handler(struct rte_mempool *mp, const char *name)
119 {
120         struct rte_mempool_handler *handler = NULL;
121         unsigned i;
122
123         /* too late, the mempool is already populated */
124         if (mp->flags & MEMPOOL_F_RING_CREATED)
125                 return -EEXIST;

Here, it returns -EEXIST.

126
127         for (i = 0; i < rte_mempool_handler_table.num_handlers; i++) {
128                 if (!strcmp(name, rte_mempool_handler_table.handler[i].name)) {
129                         handler = &rte_mempool_handler_table.handler[i];
130                         break;
131                 }
132         }
133
134         if (handler == NULL)
135                 return -EINVAL;

And here, it returns -EINVAL.

136
137         mp->handler_idx = i;
138         return 0;
139 }

So, it is possible to define those in the doc comment.
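For illustration, a standalone reduction of that control flow (simplified types and a made-up flag constant), showing the two documented failure codes a caller can distinguish:

```c
#include <assert.h>
#include <errno.h>
#include <string.h>

#define FLAG_RING_CREATED 0x1	/* stands in for MEMPOOL_F_RING_CREATED */

struct mempool {
	unsigned int flags;
	int handler_idx;
};

static const char *known_handlers[] = { "ring_mp_mc", "ring_sp_sc" };

static int
mempool_set_handler(struct mempool *mp, const char *name)
{
	unsigned int i;

	/* too late, the mempool is already populated */
	if (mp->flags & FLAG_RING_CREATED)
		return -EEXIST;

	for (i = 0; i < sizeof(known_handlers) / sizeof(known_handlers[0]); i++) {
		if (strcmp(name, known_handlers[i]) == 0) {
			mp->handler_idx = (int)i;
			return 0;
		}
	}
	return -EINVAL;	/* no handler registered under that name */
}
```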

> 
> 
> >> + */
> >> +int
> >> +rte_mempool_set_handler(struct rte_mempool *mp, const char *name);
> >> +
> >> +/**
> >> + * Register an external pool handler.
> >> + *
> >> + * @param h
> >> + *   Pointer to the external pool handler
> >> + * @return
> >> + *   - >=0: Sucess; return the index of the handler in the table.
> >> + *   - <0: Error (errno)  
> > Should this doc be more specific about the possible failures?  
> 
> This is up to the handler. We cannot know what codes will be returned at 
> this stage.
> errno was a cut-and paste error, this is now fixed.

Same, here... -ENOSPC, -EINVAL are returned in certain cases. And again,
this call is not handler-specific.

>
 
[...]

> >>   
> >> +
> >> +static struct rte_mempool_handler handler_mp_mc = {
> >> +	.name = "ring_mp_mc",
> >> +	.alloc = common_ring_alloc,
> >> +	.free = common_ring_free,
> >> +	.put = common_ring_mp_put,
> >> +	.get = common_ring_mc_get,
> >> +	.get_count = common_ring_get_count,
> >> +};
> >> +
> >> +static struct rte_mempool_handler handler_sp_sc = {
> >> +	.name = "ring_sp_sc",
> >> +	.alloc = common_ring_alloc,
> >> +	.free = common_ring_free,
> >> +	.put = common_ring_sp_put,
> >> +	.get = common_ring_sc_get,
> >> +	.get_count = common_ring_get_count,
> >> +};
> >> +
> >> +static struct rte_mempool_handler handler_mp_sc = {
> >> +	.name = "ring_mp_sc",
> >> +	.alloc = common_ring_alloc,
> >> +	.free = common_ring_free,
> >> +	.put = common_ring_mp_put,
> >> +	.get = common_ring_sc_get,
> >> +	.get_count = common_ring_get_count,
> >> +};
> >> +
> >> +static struct rte_mempool_handler handler_sp_mc = {
> >> +	.name = "ring_sp_mc",
> >> +	.alloc = common_ring_alloc,
> >> +	.free = common_ring_free,
> >> +	.put = common_ring_sp_put,
> >> +	.get = common_ring_mc_get,
> >> +	.get_count = common_ring_get_count,
> >> +};
> >> +  
> > Introducing those handlers can go as a separate patch. IMHO, that would simplify
> > the review process a lot. First introduce the mechanism, then add something
> > inside.
> >
> > I'd also note that those handlers are always available and what kind of memory
> > do they use...  
> 
> Done. Now added as a separate patch.
> 
> They don't use any memory unless they are used.

Yes, that is what I've meant... What is the source of the allocations? Where does
the common_ring_sc_get get memory from?

Anyway, any documentation describing the goal of those four declarations
would be helpful.

> 
> 
> >> +#include <stdio.h>
> >> +#include <string.h>
> >> +
> >> +#include <rte_mempool.h>
> >> +
> >> +/* indirect jump table to support external memory pools */
> >> +struct rte_mempool_handler_table rte_mempool_handler_table = {
> >> +	.sl =  RTE_SPINLOCK_INITIALIZER ,
> >> +	.num_handlers = 0
> >> +};
> >> +
> >> +/* add a new handler in rte_mempool_handler_table, return its index */  
> > It seems to me that there is no way how to put some opaque pointer into the
> > handler. In such case I would expect I can do something like:
> >
> > struct my_handler {
> > 	struct rte_mempool_handler h;
> > 	...
> > } handler;
> >
> > rte_mempool_handler_register(&handler.h);
> >
> > But I cannot because you copy the contents of the handler. By the way, this
> > should be documented.
> >
> > How can I pass an opaque pointer here? The only way I see is through the
> > rte_mempool.pool.  
> 
> I think have addressed this in a later patch, in the discussions with 
> Jerin on the list. But
> rather than passing data at register time, I pass it at create time (or 
> rather set_handler).
> And a new config void has been added to the mempool struct for this 
> purpose.

Ok, sounds promising. It just should be well specified to avoid situations
when accessing the the pool_config before it is set.

> 
> >   In that case, what about renaming the rte_mempool_handler
> > to rte_mempool_ops? Because semantically, it is not a handler, it just holds
> > the operations.
> >
> > This would improve some namings:
> >
> > rte_mempool_ext_alloc -> rte_mempool_ops_alloc
> > rte_mempool_ext_free -> rte_mempool_ops_free
> > rte_mempool_ext_get_count -> rte_mempool_ops_get_count
> > rte_mempool_handler_register -> rte_mempool_ops_register
> >
> > seems to be more readable to me. The *_ext_* mark does not say anything valuable.
> > It just scares a bit :).  
> 
> Agreed. Makes sense. The ext was intended to be 'external', but that's a 
> bit too generic, and not
> very intuitive. the 'ops' tag seems better to me. I've change this in 
> the latest patch.

Again, note that I've suggested to avoid the word _handler_ entirely.

[...]

> 
> > Regards
> > Jan  
> 
> Thanks, Jan. Very comprehensive.  I'll hopefully be pushing the latest 
> patch to the list later today (Tuesday 31st)

Cool, please CC me.

Jan

> 
> Regard,
> Dave.
> 
> 



-- 
   Jan Viktorin                  E-mail: Viktorin@RehiveTech.com
   System Architect              Web:    www.RehiveTech.com
   RehiveTech
   Brno, Czech Republic


* Re: [dpdk-dev, v5, 2/3] app/test: test external mempool handler
  2016-05-31  9:17             ` Hunt, David
@ 2016-05-31 12:14               ` Jan Viktorin
  2016-05-31 20:40                 ` Olivier MATZ
  0 siblings, 1 reply; 238+ messages in thread
From: Jan Viktorin @ 2016-05-31 12:14 UTC (permalink / raw)
  To: Hunt, David; +Cc: dev, olivier.matz, yuanhan.liu, pmatilai

On Tue, 31 May 2016 10:17:41 +0100
"Hunt, David" <david.hunt@intel.com> wrote:

> Hi Jan,
> 
> On 5/23/2016 1:45 PM, Jan Viktorin wrote:
> > On Thu, 19 May 2016 14:45:00 +0100
> > David Hunt <david.hunt@intel.com> wrote:  
> 
> --snip--
> 

[...]

> >> +  
> > The test becomes quite complex. What about having several smaller
> > tests with a clear setup and cleanup steps?  
> 
> I guess that's something we can look at in the future. For the moment 
> can we leave it?

Yes, just a suggestion. I think, Olivier (maintainer) should request this if needed.

> 
> Thanks,
> Dave.
> 
> 
> 



-- 
   Jan Viktorin                  E-mail: Viktorin@RehiveTech.com
   System Architect              Web:    www.RehiveTech.com
   RehiveTech
   Brno, Czech Republic


* Re: [dpdk-dev,v5,1/3] mempool: support external handler
  2016-05-31 12:06               ` Jan Viktorin
@ 2016-05-31 13:47                 ` Hunt, David
  2016-05-31 20:40                   ` Olivier MATZ
  0 siblings, 1 reply; 238+ messages in thread
From: Hunt, David @ 2016-05-31 13:47 UTC (permalink / raw)
  To: Jan Viktorin; +Cc: dev, olivier.matz, yuanhan.liu, pmatilai



On 5/31/2016 1:06 PM, Jan Viktorin wrote:
> On Tue, 31 May 2016 10:09:42 +0100
> "Hunt, David" <david.hunt@intel.com> wrote:
>
>> Hi Jan,
>>
> [...]
>
>>>> +typedef void *(*rte_mempool_alloc_t)(struct rte_mempool *mp);
>>>> +
>>>> +/** Free the external pool. */
>>> What is the purpose of this callback?
>>> What exactly does it allocate?
>>> Some rte_mempool internals?
>>> Or the memory?
>>> What does it return?
>> This is the main allocate function of the handler. It is up to the
>> mempool handlers control.
>> The handler's alloc function does whatever it needs to do to grab memory
>> for this handler, and places
>> a pointer to it in the *pool opaque pointer in the rte_mempool struct.
>> In the default handler, *pool
>> points to a ring, in other handlers, it will mostlikely point to a
>> different type of data structure. It will
>> be transparent to the application programmer.
> Thanks for explanation. Please, add doc comments there.
>
> Should the *mp be const here? It is not clear to me whether the callback is
> expected to modify the mempool or not (eg. set the pool pointer).

Comment added. Not const, as the *pool is set.
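A minimal standalone sketch of the asymmetry under discussion (stand-in types, malloc in place of the real ring creation): alloc takes a non-const mempool because it sets *pool, while free only ever sees the opaque pool state.

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified stand-in: the alloc callback takes a non-const mempool
 * precisely because its job is to create the handler-private state
 * and store it in mp->pool (a ring, in the default handler). */
struct mempool {
	void *pool;	/* opaque, owned by the handler */
};

static void *
ring_alloc(struct mempool *mp)
{
	mp->pool = malloc(64);	/* stands in for rte_ring_create() */
	return mp->pool;
}

/* The free callback only sees the opaque pool state, which is why it
 * takes void * rather than the mempool itself. */
static void
ring_free(void *p)
{
	free(p);
}
```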

>>>> +typedef void (*rte_mempool_free_t)(void *p);
>>>> +
>>>> +/** Put an object in the external pool. */
>>> Why this *_free callback does not accept the rte_mempool param?
>>>   
>> We're freeing the pool opaque data here.
> Add doc comments...

Done.

>>
>>>> +typedef int (*rte_mempool_put_t)(void *p, void * const *obj_table, unsigned n);
>>> What is the *p pointer?
>>> What is the obj_table?
>>> Why is it void *?
>>> Why is it const?
>>>   
>> The *p pointer is the opaque data for a given mempool handler (ring,
>> array, linked list, etc)
> Again, doc comments...
>
> I don't like the obj_table representation to be an array of void *. I could see
> it already in DPDK for defining Ethernet driver queues, so, it's probably not
> an issue. I just say, I would prefer some basic type safety like
>
> struct rte_mempool_obj {
> 	void *p;
> };
>
> Is there somebody with different opinions?
>
> [...]

Comments added. I've left as a void* for the moment.
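For reference, a small standalone sketch of the wrapper-type alternative Jan describes (names and sizes are illustrative): the struct costs nothing at runtime but lets the compiler reject an unrelated pointer array.

```c
#include <assert.h>

/* The wrapper type Jan suggests: same size as a raw pointer, but the
 * compiler now rejects an unrelated void ** being passed to put/get. */
struct mempool_obj {
	void *p;
};

#define POOL_CAP 4

static struct mempool_obj pool_store[POOL_CAP];
static unsigned int pool_len;

static int
pool_put(const struct mempool_obj *objs, unsigned int n)
{
	unsigned int i;

	if (pool_len + n > POOL_CAP)
		return -1;	/* pool full */
	for (i = 0; i < n; i++)
		pool_store[pool_len++] = objs[i];
	return 0;
}
```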

>>>> +
>>>> +/** Structure defining a mempool handler. */
>>> Later in the text, I suggested to rename rte_mempool_handler to rte_mempool_ops.
>>> I believe that it explains the purpose of this struct better. It would improve
>>> consistency in function names (the *_ext_* mark is very strange and inconsistent).
>> I agree. I've gone through all the code and renamed to
>> rte_mempool_handler_ops.
> Ok. I meant rte_mempool_ops because I find the word "handler" to be redundant.

I prefer the use of the word handler, unless others also have opinions 
either way?

>
>>>> +/**
>>>> + * Set the handler of a mempool
>>>> + *
>>>> + * This can only be done on a mempool that is not populated, i.e. just after
>>>> + * a call to rte_mempool_create_empty().
>>>> + *
>>>> + * @param mp
>>>> + *   Pointer to the memory pool.
>>>> + * @param name
>>>> + *   Name of the handler.
>>>> + * @return
>>>> + *   - 0: Sucess; the new handler is configured.
>>>> + *   - <0: Error (errno)
>>> Should this doc be more specific about the possible failures?
>>>
>>> The body of rte_mempool_set_handler does not set errno at all.
>>> It returns e.g. -EEXIST.
>> This is up to the handler. We cannot know what codes will be returned at
>> this stage.
>> errno was a cut-and paste error, this is now fixed.
> I don't think so. The rte_mempool_set_handler is not handler-specific:
[...]
> So, it is possible to define those in the doc comment.

Ah, I see now. My mistake, I assumed a different function. Added now.

>>
>>>> + */
>>>> +int
>>>> +rte_mempool_set_handler(struct rte_mempool *mp, const char *name);
>>>> +
>>>> +/**
>>>> + * Register an external pool handler.
>>>> + *
>>>> + * @param h
>>>> + *   Pointer to the external pool handler
>>>> + * @return
>>>> + *   - >=0: Sucess; return the index of the handler in the table.
>>>> + *   - <0: Error (errno)
>>> Should this doc be more specific about the possible failures?
>> This is up to the handler. We cannot know what codes will be returned at
>> this stage.
>> errno was a cut-and paste error, this is now fixed.
> Same, here... -ENOSPC, -EINVAL are returned in certain cases. And again,
> this call is not handler-specific.

Yes, Added now to comments.

>   
> [...]
>
>>>>    
>>>> +
>>>> +static struct rte_mempool_handler handler_mp_mc = {
>>>> +	.name = "ring_mp_mc",
>>>> +	.alloc = common_ring_alloc,
>>>> +	.free = common_ring_free,
>>>> +	.put = common_ring_mp_put,
>>>> +	.get = common_ring_mc_get,
>>>> +	.get_count = common_ring_get_count,
>>>> +};
>>>> +
>>>> +static struct rte_mempool_handler handler_sp_sc = {
>>>> +	.name = "ring_sp_sc",
>>>> +	.alloc = common_ring_alloc,
>>>> +	.free = common_ring_free,
>>>> +	.put = common_ring_sp_put,
>>>> +	.get = common_ring_sc_get,
>>>> +	.get_count = common_ring_get_count,
>>>> +};
>>>> +
>>>> +static struct rte_mempool_handler handler_mp_sc = {
>>>> +	.name = "ring_mp_sc",
>>>> +	.alloc = common_ring_alloc,
>>>> +	.free = common_ring_free,
>>>> +	.put = common_ring_mp_put,
>>>> +	.get = common_ring_sc_get,
>>>> +	.get_count = common_ring_get_count,
>>>> +};
>>>> +
>>>> +static struct rte_mempool_handler handler_sp_mc = {
>>>> +	.name = "ring_sp_mc",
>>>> +	.alloc = common_ring_alloc,
>>>> +	.free = common_ring_free,
>>>> +	.put = common_ring_sp_put,
>>>> +	.get = common_ring_mc_get,
>>>> +	.get_count = common_ring_get_count,
>>>> +};
>>>> +
>>> Introducing those handlers can go as a separate patch. IMHO, that would simplify
>>> the review process a lot. First introduce the mechanism, then add something
>>> inside.
>>>
>>> I'd also note that those handlers are always available and what kind of memory
>>> do they use...
>> Done. Now added as a separate patch.
>>
>> They don't use any memory unless they are used.
> Yes, that is what I've meant... What is the source of the allocations? Where does
> the common_ring_sc_get get memory from?
>
> Anyway, any documentation describing the goal of those four declarations
> would be helpful.

For these handlers, the allocations are the same as in the original code
before this patch. New handlers will have their own sources: hardware
allocators, stack-based allocators, etc.

I've added comments on the 4 declarations.

>>
>>>> +#include <stdio.h>
>>>> +#include <string.h>
>>>> +
>>>> +#include <rte_mempool.h>
>>>> +
>>>> +/* indirect jump table to support external memory pools */
>>>> +struct rte_mempool_handler_table rte_mempool_handler_table = {
>>>> +	.sl =  RTE_SPINLOCK_INITIALIZER ,
>>>> +	.num_handlers = 0
>>>> +};
>>>> +
>>>> +/* add a new handler in rte_mempool_handler_table, return its index */
>>> It seems to me that there is no way how to put some opaque pointer into the
>>> handler. In such case I would expect I can do something like:
>>>
>>> struct my_handler {
>>> 	struct rte_mempool_handler h;
>>> 	...
>>> } handler;
>>>
>>> rte_mempool_handler_register(&handler.h);
>>>
>>> But I cannot because you copy the contents of the handler. By the way, this
>>> should be documented.
>>>
>>> How can I pass an opaque pointer here? The only way I see is through the
>>> rte_mempool.pool.
>> I think have addressed this in a later patch, in the discussions with
>> Jerin on the list. But
>> rather than passing data at register time, I pass it at create time (or
>> rather set_handler).
>> And a new config void has been added to the mempool struct for this
>> purpose.
> Ok, sounds promising. It just should be well specified to avoid situations
> when accessing the the pool_config before it is set.
>
>>>    In that case, what about renaming the rte_mempool_handler
>>> to rte_mempool_ops? Because semantically, it is not a handler, it just holds
>>> the operations.
>>>
>>> This would improve some namings:
>>>
>>> rte_mempool_ext_alloc -> rte_mempool_ops_alloc
>>> rte_mempool_ext_free -> rte_mempool_ops_free
>>> rte_mempool_ext_get_count -> rte_mempool_ops_get_count
>>> rte_mempool_handler_register -> rte_mempool_ops_register
>>>
>>> seems to be more readable to me. The *_ext_* mark does not say anything valuable.
>>> It just scares a bit :).
>> Agreed. Makes sense. The ext was intended to be 'external', but that's a
>> bit too generic, and not
>> very intuitive. the 'ops' tag seems better to me. I've change this in
>> the latest patch.
> Again, note that I've suggested to avoid the word _handler_ entirely.
>
> [...]
>
>>> Regards
>>> Jan
>> Thanks, Jan. Very comprehensive.  I'll hopefully be pushing the latest
>> patch to the list later today (Tuesday 31st)
> Cool, please CC me.

Will do.


Rgds,
Dave.


* Re: [PATCH v5 1/3] mempool: support external handler
  2016-05-31  8:53                       ` Jerin Jacob
@ 2016-05-31 15:37                         ` Hunt, David
  2016-05-31 16:03                           ` Jerin Jacob
  0 siblings, 1 reply; 238+ messages in thread
From: Hunt, David @ 2016-05-31 15:37 UTC (permalink / raw)
  To: Jerin Jacob; +Cc: dev, olivier.matz, yuanhan.liu, pmatilai



On 5/31/2016 9:53 AM, Jerin Jacob wrote:
> On Mon, May 30, 2016 at 12:27:26PM +0100, Hunt, David wrote:
>> New mempool handlers will use rte_mempool_create_empty(),
>> rte_mempool_set_handler(),
>> then rte_mempool_populate_*(). These three functions are new to this
>> release, to no problem
> Having separate APIs for external pool-manager create is worrisome in
> application perspective. Is it possible to have rte_mempool_[xmem]_create
> for the both external and existing SW pool manager and make
> rte_mempool_create_empty and rte_mempool_populate_*  internal functions.
>
> IMO, We can do that by selecting  specific rte_mempool_set_handler()
> based on _flags_ encoding, something like below
>
> bit 0 - 16   // generic bits uses across all the pool managers
> bit 16 - 23  // pool handler specific flags bits
> bit 24 - 31  // to select the specific pool manager(Up to 256 different flavors of
> pool managers, For backward compatibility, make '0'(in 24-31) to select
> existing SW pool manager.
>
> and applications can choose the handlers by selecting the flag in
> rte_mempool_[xmem]_create, That way it will be easy in testpmd or any other
> applications to choose the pool handler from command line etc in future.

There might be issues with the 8-bit handler number, as we'd have to add an
API call to first get the index of a given handler by name, then OR it into
the flags. That would also mean extra API calls for the non-default external
handlers. I do agree with the handler-specific bits, though.

Having the _empty and _set_handler  APIs seems to me to be OK for the
moment. Maybe Olivier could comment?
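For reference, the bit layout Jerin proposes could be sketched as below (all macro and function names here are hypothetical, invented for illustration; the "bit 0 - 16" in the proposal is read as bits 0-15, since bits 16-23 are handler-specific):

```c
#include <stdint.h>

/* Sketch of the proposed flags encoding: generic flags in bits 0-15,
 * handler-specific flags in bits 16-23, and an 8-bit handler index in
 * bits 24-31 (index 0 = the existing SW pool manager, for backward
 * compatibility). */
#define POOL_GENERIC_MASK   0x0000FFFFu
#define POOL_HFLAGS_SHIFT   16
#define POOL_HFLAGS_MASK    0x00FF0000u
#define POOL_HANDLER_SHIFT  24

static inline uint32_t pool_flags_encode(uint32_t generic,
                                         uint8_t hflags,
                                         uint8_t handler_idx)
{
    return (generic & POOL_GENERIC_MASK)
         | ((uint32_t)hflags << POOL_HFLAGS_SHIFT)
         | ((uint32_t)handler_idx << POOL_HANDLER_SHIFT);
}

static inline uint8_t pool_flags_handler(uint32_t flags)
{
    return (uint8_t)(flags >> POOL_HANDLER_SHIFT);
}

static inline uint8_t pool_flags_hflags(uint32_t flags)
{
    return (uint8_t)((flags & POOL_HFLAGS_MASK) >> POOL_HFLAGS_SHIFT);
}
```

With this encoding, legacy callers passing only generic flags get handler index 0, i.e. the built-in SW pool manager, unchanged.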

> and we can remove "mbuf: get default mempool handler from configuration"
> change-set OR just add if RTE_MBUF_DEFAULT_MEMPOOL_HANDLER is defined then set
> the same with rte_mempool_set_handler in rte_mempool_[xmem]_create.
>
> What do you think?

The "configuration" patch is to allow users to quickly change the mempool
handler by changing RTE_MBUF_DEFAULT_MEMPOOL_HANDLER to the name of another
known handler. It could just as easily be left out, with applications using
rte_mempool_create instead.

>> to add a parameter to one of them for the config data. Also since we're
>> adding some new
>> elements to the mempool structure, how about we add a new pointer for a void
>> pointer to a
>> config data structure, as defined by the handler.
>>
>> So, new element in rte_mempool struct alongside the *pool
>>      void *pool;
>>      void *pool_config;
>>
>> Then add a param to the rte_mempool_set_handler function:
>> int
>> rte_mempool_set_handler(struct rte_mempool *mp, const char *name, void
>> *pool_config)
> IMO, Maybe we need to have _set_ and _get_.So I think we can have
> two separate callback in external pool-manger for that if required.
> IMO, For now, We can live with pool manager specific 8 bits(bit 16 -23)
> for the configuration as mentioned above and add the new callbacks for
> set and get when required?

OK, we'll keep the config to the 8 bits of the flags for now. That also
means I won't add the pool_config void pointer either (for the moment).

>>> 2) IMO, It is better to change void *pool in struct rte_mempool to
>>> anonymous union type, something like below, so that mempool
>>> implementation can choose the best type.
>>> 	union {
>>> 		void *pool;
>>> 		uint64_t val;
>>> 	}
>> Could we do this by using the union for the *pool_config suggested above,
>> would that give
>> you what you need?
> It would be an extra overhead for external pool manager to _alloc_ memory
> and store the allocated pointer in mempool struct(as *pool) and use pool for
> pointing other data structures as some implementation need only
> limited bytes to store the external pool manager specific context.
>
> In order to fix this problem, We may classify fast path and slow path
> elements in struct rte_mempool and move all fast path elements in first
> cache line and create an empty opaque space in the remaining bytes in the
> cache line so that if the external pool manager needs only limited space
> then it is not required to allocate the separate memory to save the
> per core cache  in fast-path
>
> something like below,
> union {
> 	void *pool;
> 	uint64_t val;
> 	uint8_t extra_mem[16] // available free bytes in fast path cache line
>
> }

Something for the future, perhaps? Will the 8-bits in the flags suffice 
for now?
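The space saving Jerin describes can be made concrete with a quick sizeof check (an illustrative sketch only; the member names come from the discussion above, not from a final DPDK definition):

```c
#include <stdint.h>

/* Jerin's suggestion, sketched: let a handler with a small context
 * store it inline in struct rte_mempool instead of allocating
 * separate memory and storing only a pointer. */
union pool_priv_data {
    void    *pool;          /* pointer to an externally allocated context */
    uint64_t val;           /* e.g. a hardware pool id */
    uint8_t  extra_mem[16]; /* free bytes in the fast-path cache line */
};
```

A handler needing only a 64-bit id or a few bytes of context then avoids a separate allocation and an extra pointer dereference in the fast path.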


> Other points,
>
> 1) Is it possible to remove unused is_mp in  __mempool_put_bulk
> function as it is just a internal function.

Fixed

> 2) Considering "get" and "put" are the fast-path callbacks for
> pool-manger, Is it possible to avoid the extra overhead of the following
> _load_ and additional cache line on each call,
> rte_mempool_handler_table.handler[handler_idx]
>
> I understand it is for multiprocess support but I am thing can we
> introduce something like ethernet API support for multiprocess and
> resolve "put" and "get" functions pointer on init and store in
> struct mempool. Some thinking like
>
> file: drivers/net/ixgbe/ixgbe_ethdev.c
> search for if (rte_eal_process_type() != RTE_PROC_PRIMARY) {

I'll look at this one before posting the next version of the patch 
(soon). :)
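The per-process resolution Jerin points at could be sketched roughly as follows (none of these names are real DPDK symbols; this only illustrates the idea of caching resolved function pointers at init instead of indexing the shared handler table on every get/put):

```c
#include <stddef.h>

/* Sketch: resolve the handler callbacks once at init time and cache
 * the function pointers in per-process mempool data, similar to how
 * ixgbe re-resolves pointers for secondary processes, so the fast
 * path avoids loading rte_mempool_handler_table.handler[idx]. */
typedef int (*pool_get_t)(void *pool, void **objs, unsigned int n);
typedef int (*pool_put_t)(void *pool, void * const *objs, unsigned int n);

struct pool_ops {
    pool_get_t get;
    pool_put_t put;
};

struct mempool_priv {
    pool_get_t get;   /* resolved once, called directly in the fast path */
    pool_put_t put;
};

static void mempool_resolve_ops(struct mempool_priv *priv,
                                const struct pool_ops *table,
                                unsigned int handler_idx)
{
    priv->get = table[handler_idx].get;
    priv->put = table[handler_idx].put;
}

/* Dummy handler used only to demonstrate the resolution. */
static int demo_get(void *pool, void **objs, unsigned int n)
{ (void)pool; (void)objs; return (int)n; }
static int demo_put(void *pool, void * const *objs, unsigned int n)
{ (void)pool; (void)objs; return -(int)n; }
```

The resolution step would run per process (primary and secondary), since function pointers are not valid across process address spaces.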


> Jerin
>
Thanks for your input on this, much appreciated.
Dave.

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v5 1/3] mempool: support external handler
  2016-05-31 15:37                         ` Hunt, David
@ 2016-05-31 16:03                           ` Jerin Jacob
  2016-05-31 20:41                             ` Olivier MATZ
  0 siblings, 1 reply; 238+ messages in thread
From: Jerin Jacob @ 2016-05-31 16:03 UTC (permalink / raw)
  To: Hunt, David; +Cc: dev, olivier.matz, yuanhan.liu, pmatilai

On Tue, May 31, 2016 at 04:37:02PM +0100, Hunt, David wrote:
> 
> 
> On 5/31/2016 9:53 AM, Jerin Jacob wrote:
> > On Mon, May 30, 2016 at 12:27:26PM +0100, Hunt, David wrote:
> > > New mempool handlers will use rte_mempool_create_empty(),
> > > rte_mempool_set_handler(),
> > > then rte_mempool_populate_*(). These three functions are new to this
> > > release, to no problem
> > Having separate APIs for external pool-manager create is worrisome in
> > application perspective. Is it possible to have rte_mempool_[xmem]_create
> > for the both external and existing SW pool manager and make
> > rte_mempool_create_empty and rte_mempool_populate_*  internal functions.
> > 
> > IMO, We can do that by selecting  specific rte_mempool_set_handler()
> > based on _flags_ encoding, something like below
> > 
> > bit 0 - 16   // generic bits uses across all the pool managers
> > bit 16 - 23  // pool handler specific flags bits
> > bit 24 - 31  // to select the specific pool manager(Up to 256 different flavors of
> > pool managers, For backward compatibility, make '0'(in 24-31) to select
> > existing SW pool manager.
> > 
> > and applications can choose the handlers by selecting the flag in
> > rte_mempool_[xmem]_create, That way it will be easy in testpmd or any other
> > applications to choose the pool handler from command line etc in future.
> 
> There might be issues with the 8-bit handler number, as we'd have to add an
> api call to
> first get the index of a given hander by name, then OR it into the flags.
> That would mean
> also extra API calls for the non-default external handlers. I do agree with
> the handler-specific
> bits though.

That would be an internal API (mapping the upper 8 bits to a handler name),
right? Seems OK to me.

> 
> Having the _empty and _set_handler  APIs seems to me to be OK for the
> moment. Maybe Olivier could comment?
> 

But that needs 3 APIs, right? _empty, _set_handler and _populate? I believe
it is better to reduce the public API surface wherever possible.

Maybe Olivier could comment ?


> > and we can remove "mbuf: get default mempool handler from configuration"
> > change-set OR just add if RTE_MBUF_DEFAULT_MEMPOOL_HANDLER is defined then set
> > the same with rte_mempool_set_handler in rte_mempool_[xmem]_create.
> > 
> > What do you think?
> 
> The "configuration" patch is to allow users to quickly change the mempool
> handler
> by changing RTE_MBUF_DEFAULT_MEMPOOL_HANDLER to another string of a known
> handler. It could just as easily be left out and use the rte_mempool_create.
>

Yes, I understand, but I am trying to avoid a build-time constant. IMO, it
would be better if RTE_MBUF_DEFAULT_MEMPOOL_HANDLER were not defined in the
config by default, and for a quick change developers could introduce a build
with RTE_MBUF_DEFAULT_MEMPOOL_HANDLER="specific handler".

 
> > > to add a parameter to one of them for the config data. Also since we're
> > > adding some new
> > > elements to the mempool structure, how about we add a new pointer for a void
> > > pointer to a
> > > config data structure, as defined by the handler.
> > > 
> > > So, new element in rte_mempool struct alongside the *pool
> > >      void *pool;
> > >      void *pool_config;
> > > 
> > > Then add a param to the rte_mempool_set_handler function:
> > > int
> > > rte_mempool_set_handler(struct rte_mempool *mp, const char *name, void
> > > *pool_config)
> > IMO, Maybe we need to have _set_ and _get_.So I think we can have
> > two separate callback in external pool-manger for that if required.
> > IMO, For now, We can live with pool manager specific 8 bits(bit 16 -23)
> > for the configuration as mentioned above and add the new callbacks for
> > set and get when required?
> 
> OK, We'll keep the config to the 8 bits of the flags for now. That will also
> mean I won't
> add the pool_config void pointer either (for the moment)

OK to me.

> 
> > > > 2) IMO, It is better to change void *pool in struct rte_mempool to
> > > > anonymous union type, something like below, so that mempool
> > > > implementation can choose the best type.
> > > > 	union {
> > > > 		void *pool;
> > > > 		uint64_t val;
> > > > 	}
> > > Could we do this by using the union for the *pool_config suggested above,
> > > would that give
> > > you what you need?
> > It would be an extra overhead for external pool manager to _alloc_ memory
> > and store the allocated pointer in mempool struct(as *pool) and use pool for
> > pointing other data structures as some implementation need only
> > limited bytes to store the external pool manager specific context.
> > 
> > In order to fix this problem, We may classify fast path and slow path
> > elements in struct rte_mempool and move all fast path elements in first
> > cache line and create an empty opaque space in the remaining bytes in the
> > cache line so that if the external pool manager needs only limited space
> > then it is not required to allocate the separate memory to save the
> > per core cache  in fast-path
> > 
> > something like below,
> > union {
> > 	void *pool;
> > 	uint64_t val;
> > 	uint8_t extra_mem[16] // available free bytes in fast path cache line
> > 
> > }
> 
> Something for the future, perhaps? Will the 8-bits in the flags suffice for
> now?

OK. But a simple anonymous union of the same size should be OK to add now?
Not much change, I believe; if it's difficult then postpone it:

union {
	void *pool;
	uint64_t val;
}

> 
> 
> > Other points,
> > 
> > 1) Is it possible to remove unused is_mp in  __mempool_put_bulk
> > function as it is just a internal function.
> 
> Fixed

__mempool_get_bulk too.

 
> > 2) Considering "get" and "put" are the fast-path callbacks for
> > pool-manger, Is it possible to avoid the extra overhead of the following
> > _load_ and additional cache line on each call,
> > rte_mempool_handler_table.handler[handler_idx]
> > 
> > I understand it is for multiprocess support but I am thing can we
> > introduce something like ethernet API support for multiprocess and
> > resolve "put" and "get" functions pointer on init and store in
> > struct mempool. Some thinking like
> > 
> > file: drivers/net/ixgbe/ixgbe_ethdev.c
> > search for if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
> 
> I'll look at this one before posting the next version of the patch (soon).
> :)

OK

> 
> 
> > Jerin
> > 
> Thanks for your input on this, much appreciated.
> Dave.
> 

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [dpdk-dev, v5, 2/3] app/test: test external mempool handler
  2016-05-31 12:14               ` Jan Viktorin
@ 2016-05-31 20:40                 ` Olivier MATZ
  0 siblings, 0 replies; 238+ messages in thread
From: Olivier MATZ @ 2016-05-31 20:40 UTC (permalink / raw)
  To: Jan Viktorin, Hunt, David; +Cc: dev, yuanhan.liu, pmatilai

Hi,

On 05/31/2016 02:14 PM, Jan Viktorin wrote:
> On Tue, 31 May 2016 10:17:41 +0100
> "Hunt, David" <david.hunt@intel.com> wrote:
>> On 5/23/2016 1:45 PM, Jan Viktorin wrote:
>>> The test becomes quite complex. What about having several smaller
>>> tests with a clear setup and cleanup steps?  
>>
>> I guess that's something we can look at in the future. For the moment 
>> can we leave it?
> 
> Yes, just a suggestion. I think, Olivier (maintainer) should request this if needed.

Yes, I think we can leave it as is for now. But I agree this is something
we could enhance.

Thanks
Olivier

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [dpdk-dev,v5,1/3] mempool: support external handler
  2016-05-31 13:47                 ` Hunt, David
@ 2016-05-31 20:40                   ` Olivier MATZ
  2016-06-01  9:39                     ` Hunt, David
  2016-06-01 12:30                     ` Jan Viktorin
  0 siblings, 2 replies; 238+ messages in thread
From: Olivier MATZ @ 2016-05-31 20:40 UTC (permalink / raw)
  To: Hunt, David, Jan Viktorin; +Cc: dev, yuanhan.liu, pmatilai, jerin.jacob

Hi,

On 05/31/2016 03:47 PM, Hunt, David wrote:
> On 5/31/2016 1:06 PM, Jan Viktorin wrote:
>> On Tue, 31 May 2016 10:09:42 +0100
>> "Hunt, David" <david.hunt@intel.com> wrote:
>>
>>> The *p pointer is the opaque data for a given mempool handler (ring,
>>> array, linked list, etc)
>> Again, doc comments...
>>
>> I don't like the obj_table representation to be an array of void *. I
>> could see
>> it already in DPDK for defining Ethernet driver queues, so, it's
>> probably not
>> an issue. I just say, I would prefer some basic type safety like
>>
>> struct rte_mempool_obj {
>>     void *p;
>> };
>>
>> Is there somebody with different opinions?
>>
>> [...]
> 
> Comments added. I've left as a void* for the moment.

Jan, could you please detail why you think having a
rte_mempool_obj structure brings more safety?

For now, I'm in favor of keeping the array of void *, because
that's what we use in other mempool or ring functions.


>>>>> +/** Structure defining a mempool handler. */
>>>> Later in the text, I suggested to rename rte_mempool_handler to
>>>> rte_mempool_ops.
>>>> I believe that it explains the purpose of this struct better. It
>>>> would improve
>>>> consistency in function names (the *_ext_* mark is very strange and
>>>> inconsistent).
>>> I agree. I've gone through all the code and renamed to
>>> rte_mempool_handler_ops.
>> Ok. I meant rte_mempool_ops because I find the word "handler" to be
>> redundant.
> 
> I prefer the use of the word handler, unless others also have opinions
> either way?

Well, I think rte_mempool_ops is clear enough, and shorter,
so I'd vote for it.


Regards,
Olivier

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v5 1/3] mempool: support external handler
  2016-05-31 16:03                           ` Jerin Jacob
@ 2016-05-31 20:41                             ` Olivier MATZ
  2016-05-31 21:11                               ` Jerin Jacob
  0 siblings, 1 reply; 238+ messages in thread
From: Olivier MATZ @ 2016-05-31 20:41 UTC (permalink / raw)
  To: Jerin Jacob, Hunt, David; +Cc: dev, yuanhan.liu, pmatilai, Jan Viktorin

Hi,

On 05/31/2016 06:03 PM, Jerin Jacob wrote:
> On Tue, May 31, 2016 at 04:37:02PM +0100, Hunt, David wrote:
>>
>>
>> On 5/31/2016 9:53 AM, Jerin Jacob wrote:
>>> On Mon, May 30, 2016 at 12:27:26PM +0100, Hunt, David wrote:
>>>> New mempool handlers will use rte_mempool_create_empty(),
>>>> rte_mempool_set_handler(),
>>>> then rte_mempool_populate_*(). These three functions are new to this
>>>> release, to no problem
>>> Having separate APIs for external pool-manager create is worrisome in
>>> application perspective. Is it possible to have rte_mempool_[xmem]_create
>>> for the both external and existing SW pool manager and make
>>> rte_mempool_create_empty and rte_mempool_populate_*  internal functions.
>>>
>>> IMO, We can do that by selecting  specific rte_mempool_set_handler()
>>> based on _flags_ encoding, something like below
>>>
>>> bit 0 - 16   // generic bits uses across all the pool managers
>>> bit 16 - 23  // pool handler specific flags bits
>>> bit 24 - 31  // to select the specific pool manager(Up to 256 different flavors of
>>> pool managers, For backward compatibility, make '0'(in 24-31) to select
>>> existing SW pool manager.
>>>
>>> and applications can choose the handlers by selecting the flag in
>>> rte_mempool_[xmem]_create, That way it will be easy in testpmd or any other
>>> applications to choose the pool handler from command line etc in future.
>>
>> There might be issues with the 8-bit handler number, as we'd have to add an
>> api call to
>> first get the index of a given hander by name, then OR it into the flags.
>> That would mean
>> also extra API calls for the non-default external handlers. I do agree with
>> the handler-specific
>> bits though.
> 
> That would be an internal API(upper 8 bits to handler name). Right ?
> Seems to be OK for me.
> 
>>
>> Having the _empty and _set_handler  APIs seems to me to be OK for the
>> moment. Maybe Olivier could comment?
>>
> 
> But need 3 APIs. Right? _empty , _set_handler and _populate ? I believe
> it is better reduce the public API in spec where ever possible ?
> 
> Maybe Olivier could comment ?

Well, I think having 3 different functions is not a problem if the API
is clearer.

In my opinion, the following:
	rte_mempool_create_empty()
	rte_mempool_set_handler()
	rte_mempool_populate()

is clearer than:
	rte_mempool_create(15 args)

Splitting the flags into 3 groups, with one not being flags but a
pool handler number, looks overcomplicated from a user perspective.
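From the application side, the three-call flow reads roughly like this (a sketch only: the exact signatures were still under discussion in this thread, and the argument lists and handler name below are assumed, not final):

```c
/* Hypothetical usage sketch of the split API under discussion.
 * Error handling omitted; argument lists are illustrative. */
struct rte_mempool *mp;

mp = rte_mempool_create_empty("pkt_pool", nb_objs, obj_size,
                              cache_size, priv_size, socket_id, 0);
rte_mempool_set_handler(mp, "my_hw_handler");
rte_mempool_populate_default(mp);
```

Each step has one clear job, which is the readability argument being made here against a single create call taking every parameter at once.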

>>> and we can remove "mbuf: get default mempool handler from configuration"
>>> change-set OR just add if RTE_MBUF_DEFAULT_MEMPOOL_HANDLER is defined then set
>>> the same with rte_mempool_set_handler in rte_mempool_[xmem]_create.
>>>
>>> What do you think?
>>
>> The "configuration" patch is to allow users to quickly change the mempool
>> handler
>> by changing RTE_MBUF_DEFAULT_MEMPOOL_HANDLER to another string of a known
>> handler. It could just as easily be left out and use the rte_mempool_create.
>>
> 
> Yes, I understand, but I am trying to avoid build time constant. IMO, It
> would be better by default RTE_MBUF_DEFAULT_MEMPOOL_HANDLER is not
> defined in config. and for quick change developers can introduce the build 
> with RTE_MBUF_DEFAULT_MEMPOOL_HANDLER="specific handler"

My understanding of the compile-time configuration option was
to allow a specific architecture to define a specific hw-assisted
handler by default.

Indeed, if there is no such need for now, we may remove it. But
we need a way to select another handler, at least in test-pmd
(in command line arguments?).


>>>> to add a parameter to one of them for the config data. Also since we're
>>>> adding some new
>>>> elements to the mempool structure, how about we add a new pointer for a void
>>>> pointer to a
>>>> config data structure, as defined by the handler.
>>>>
>>>> So, new element in rte_mempool struct alongside the *pool
>>>>      void *pool;
>>>>      void *pool_config;
>>>>
>>>> Then add a param to the rte_mempool_set_handler function:
>>>> int
>>>> rte_mempool_set_handler(struct rte_mempool *mp, const char *name, void
>>>> *pool_config)
>>> IMO, Maybe we need to have _set_ and _get_.So I think we can have
>>> two separate callback in external pool-manger for that if required.
>>> IMO, For now, We can live with pool manager specific 8 bits(bit 16 -23)
>>> for the configuration as mentioned above and add the new callbacks for
>>> set and get when required?
>>
>> OK, We'll keep the config to the 8 bits of the flags for now. That will also
>> mean I won't
>> add the pool_config void pointer either (for the moment)
> 
> OK to me.

I'm not sure I'm getting it. Does it mean having something like
this ?

rte_mempool_set_handler(struct rte_mempool *mp, const char *name,
	unsigned int flags)

Or does it mean some of the flags passed to rte_mempool_create*()
will be specific to some handlers?


Before adding handler-specific flags or config, can we ensure we
will need them? What kind of handler-specific configuration/flags
do you think we will need? Just an idea: what about having a global
configuration for all mempools using a given handler?



>>>>> 2) IMO, It is better to change void *pool in struct rte_mempool to
>>>>> anonymous union type, something like below, so that mempool
>>>>> implementation can choose the best type.
>>>>> 	union {
>>>>> 		void *pool;
>>>>> 		uint64_t val;
>>>>> 	}
>>>> Could we do this by using the union for the *pool_config suggested above,
>>>> would that give
>>>> you what you need?
>>> It would be an extra overhead for external pool manager to _alloc_ memory
>>> and store the allocated pointer in mempool struct(as *pool) and use pool for
>>> pointing other data structures as some implementation need only
>>> limited bytes to store the external pool manager specific context.
>>>
>>> In order to fix this problem, We may classify fast path and slow path
>>> elements in struct rte_mempool and move all fast path elements in first
>>> cache line and create an empty opaque space in the remaining bytes in the
>>> cache line so that if the external pool manager needs only limited space
>>> then it is not required to allocate the separate memory to save the
>>> per core cache  in fast-path
>>>
>>> something like below,
>>> union {
>>> 	void *pool;
>>> 	uint64_t val;
>>> 	uint8_t extra_mem[16] // available free bytes in fast path cache line
>>>
>>> }
>>
>> Something for the future, perhaps? Will the 8-bits in the flags suffice for
>> now?
> 
> OK. But simple anonymous union for same type should be OK add now? Not
> much change I believe, If its difficult then postpone it
> 
> union {
> 	void *pool;
> 	uint64_t val;
> }

I'm ok with the simple union with (void *) and (uint64_t).
Maybe "val" should be replaced by something more appropriate.
Is "pool_id" a better name?


Thanks David for working on this, and thanks Jerin and Jan for
the good comments and suggestions!

Regards
Olivier

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v5 1/3] mempool: support external handler
  2016-05-31 20:41                             ` Olivier MATZ
@ 2016-05-31 21:11                               ` Jerin Jacob
  2016-06-01 10:46                                 ` Hunt, David
  0 siblings, 1 reply; 238+ messages in thread
From: Jerin Jacob @ 2016-05-31 21:11 UTC (permalink / raw)
  To: Olivier MATZ; +Cc: Hunt, David, dev, yuanhan.liu, pmatilai, Jan Viktorin

On Tue, May 31, 2016 at 10:41:00PM +0200, Olivier MATZ wrote:
> Hi,
> 
> On 05/31/2016 06:03 PM, Jerin Jacob wrote:
> > On Tue, May 31, 2016 at 04:37:02PM +0100, Hunt, David wrote:
> >>
> >>
> >> On 5/31/2016 9:53 AM, Jerin Jacob wrote:
> >>> On Mon, May 30, 2016 at 12:27:26PM +0100, Hunt, David wrote:
> >>>> New mempool handlers will use rte_mempool_create_empty(),
> >>>> rte_mempool_set_handler(),
> >>>> then rte_mempool_populate_*(). These three functions are new to this
> >>>> release, to no problem
> >>> Having separate APIs for external pool-manager create is worrisome in
> >>> application perspective. Is it possible to have rte_mempool_[xmem]_create
> >>> for the both external and existing SW pool manager and make
> >>> rte_mempool_create_empty and rte_mempool_populate_*  internal functions.
> >>>
> >>> IMO, We can do that by selecting  specific rte_mempool_set_handler()
> >>> based on _flags_ encoding, something like below
> >>>
> >>> bit 0 - 16   // generic bits uses across all the pool managers
> >>> bit 16 - 23  // pool handler specific flags bits
> >>> bit 24 - 31  // to select the specific pool manager(Up to 256 different flavors of
> >>> pool managers, For backward compatibility, make '0'(in 24-31) to select
> >>> existing SW pool manager.
> >>>
> >>> and applications can choose the handlers by selecting the flag in
> >>> rte_mempool_[xmem]_create, That way it will be easy in testpmd or any other
> >>> applications to choose the pool handler from command line etc in future.
> >>
> >> There might be issues with the 8-bit handler number, as we'd have to add an
> >> api call to
> >> first get the index of a given hander by name, then OR it into the flags.
> >> That would mean
> >> also extra API calls for the non-default external handlers. I do agree with
> >> the handler-specific
> >> bits though.
> > 
> > That would be an internal API(upper 8 bits to handler name). Right ?
> > Seems to be OK for me.
> > 
> >>
> >> Having the _empty and _set_handler  APIs seems to me to be OK for the
> >> moment. Maybe Olivier could comment?
> >>
> > 
> > But need 3 APIs. Right? _empty , _set_handler and _populate ? I believe
> > it is better reduce the public API in spec where ever possible ?
> > 
> > Maybe Olivier could comment ?
> 
> Well, I think having 3 different functions is not a problem if the API
> is clearer.
> 
> In my opinion, the following:
> 	rte_mempool_create_empty()
> 	rte_mempool_set_handler()
> 	rte_mempool_populate()
> 
> is clearer than:
> 	rte_mempool_create(15 args)

But the proposed scheme does not add any new arguments to
rte_mempool_create; it just extends the existing flags.

rte_mempool_create(15 args) is still there as the API for internal pool
creation.

> 
> Splitting the flags into 3 groups, with one not beeing flags but a
> pool handler number looks overcomplicated from a user perspective.

I am concerned about seamless integration with existing applications. IMO,
it's not worth having separate functions for external vs internal pool
creation at the application level (now every application has to add this
logic everywhere for no good reason); just my 2 cents.

> 
> >>> and we can remove "mbuf: get default mempool handler from configuration"
> >>> change-set OR just add if RTE_MBUF_DEFAULT_MEMPOOL_HANDLER is defined then set
> >>> the same with rte_mempool_set_handler in rte_mempool_[xmem]_create.
> >>>
> >>> What do you think?
> >>
> >> The "configuration" patch is to allow users to quickly change the mempool
> >> handler
> >> by changing RTE_MBUF_DEFAULT_MEMPOOL_HANDLER to another string of a known
> >> handler. It could just as easily be left out and use the rte_mempool_create.
> >>
> > 
> > Yes, I understand, but I am trying to avoid build time constant. IMO, It
> > would be better by default RTE_MBUF_DEFAULT_MEMPOOL_HANDLER is not
> > defined in config. and for quick change developers can introduce the build 
> > with RTE_MBUF_DEFAULT_MEMPOOL_HANDLER="specific handler"
> 
> My understanding of the compile-time configuration option was
> to allow a specific architecture to define a specific hw-assisted
> handler by default.
> 
> Indeed, if there is no such need for now, we may remove it. But
> we need a way to select another handler, at least in test-pmd
> (in command line arguments?).

Like txflags in testpmd, IMO mempool flags will help to select the handlers
seamlessly, as suggested above.

If we are _not_ taking the flags-based selection scheme then it makes sense
to keep RTE_MBUF_DEFAULT_MEMPOOL_HANDLER.

> 
> 
> >>>> to add a parameter to one of them for the config data. Also since we're
> >>>> adding some new
> >>>> elements to the mempool structure, how about we add a new pointer for a void
> >>>> pointer to a
> >>>> config data structure, as defined by the handler.
> >>>>
> >>>> So, new element in rte_mempool struct alongside the *pool
> >>>>      void *pool;
> >>>>      void *pool_config;
> >>>>
> >>>> Then add a param to the rte_mempool_set_handler function:
> >>>> int
> >>>> rte_mempool_set_handler(struct rte_mempool *mp, const char *name, void
> >>>> *pool_config)
> >>> IMO, Maybe we need to have _set_ and _get_.So I think we can have
> >>> two separate callback in external pool-manger for that if required.
> >>> IMO, For now, We can live with pool manager specific 8 bits(bit 16 -23)
> >>> for the configuration as mentioned above and add the new callbacks for
> >>> set and get when required?
> >>
> >> OK, We'll keep the config to the 8 bits of the flags for now. That will also
> >> mean I won't
> >> add the pool_config void pointer either (for the moment)
> > 
> > OK to me.
> 
> I'm not sure I'm getting it. Does it mean having something like
> this ?
> 
> rte_mempool_set_handler(struct rte_mempool *mp, const char *name,
> 	unsigned int flags)
> 
> Or does it mean some of the flags passed to rte_mempool_create*()
> will be specific to some handlers?
> 
> 
> Before adding handler-specific flags or config, can we ensure we
> will need them? What kind of handler-specific configuration/flags
> do you think we will need? Just an idea: what about having a global
> configuration for all mempools using a given handler?

We may need to configure the external pool manager, e.g. "don't free the
packets back to the pool after they have been sent out" (just an example of
a valid external HW pool manager configuration).

> 
> 
> 
> >>>>> 2) IMO, It is better to change void *pool in struct rte_mempool to
> >>>>> anonymous union type, something like below, so that mempool
> >>>>> implementation can choose the best type.
> >>>>> 	union {
> >>>>> 		void *pool;
> >>>>> 		uint64_t val;
> >>>>> 	}
> >>>> Could we do this by using the union for the *pool_config suggested above,
> >>>> would that give
> >>>> you what you need?
> >>> It would be an extra overhead for external pool manager to _alloc_ memory
> >>> and store the allocated pointer in mempool struct(as *pool) and use pool for
> >>> pointing other data structures as some implementation need only
> >>> limited bytes to store the external pool manager specific context.
> >>>
> >>> In order to fix this problem, We may classify fast path and slow path
> >>> elements in struct rte_mempool and move all fast path elements in first
> >>> cache line and create an empty opaque space in the remaining bytes in the
> >>> cache line so that if the external pool manager needs only limited space
> >>> then it is not required to allocate the separate memory to save the
> >>> per core cache  in fast-path
> >>>
> >>> something like below,
> >>> union {
> >>> 	void *pool;
> >>> 	uint64_t val;
> >>> 	uint8_t extra_mem[16] // available free bytes in fast path cache line
> >>>
> >>> }
> >>
> >> Something for the future, perhaps? Will the 8-bits in the flags suffice for
> >> now?
> > 
> > OK. But simple anonymous union for same type should be OK add now? Not
> > much change I believe, If its difficult then postpone it
> > 
> > union {
> > 	void *pool;
> > 	uint64_t val;
> > }
> 
> I'm ok with the simple union with (void *) and (uint64_t).
> Maybe "val" should be replaced by something more appropriate.
> Is "pool_id" a better name?

How about "opaque"?

> 
> 
> Thanks David for working on this, and thanks Jerin and Jan for
> the good comments and suggestions!
> 
> Regards
> Olivier

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [dpdk-dev,v5,1/3] mempool: support external handler
  2016-05-31 20:40                   ` Olivier MATZ
@ 2016-06-01  9:39                     ` Hunt, David
  2016-06-01 12:30                     ` Jan Viktorin
  1 sibling, 0 replies; 238+ messages in thread
From: Hunt, David @ 2016-06-01  9:39 UTC (permalink / raw)
  To: Olivier MATZ, Jan Viktorin; +Cc: dev, yuanhan.liu, pmatilai, jerin.jacob



On 5/31/2016 9:40 PM, Olivier MATZ wrote:

[...]

>>>>>> +/** Structure defining a mempool handler. */
>>>>> Later in the text, I suggested to rename rte_mempool_handler to
>>>>> rte_mempool_ops.
>>>>> I believe that it explains the purpose of this struct better. It
>>>>> would improve
>>>>> consistency in function names (the *_ext_* mark is very strange and
>>>>> inconsistent).
>>>> I agree. I've gone through all the code and renamed to
>>>> rte_mempool_handler_ops.
>>> Ok. I meant rte_mempool_ops because I find the word "handler" to be
>>> redundant.
>> I prefer the use of the word handler, unless others also have opinions
>> either way?
> Well, I think rte_mempool_ops is clear enough, and shorter,
> so I'd vote for it.
>

OK, I've just changed it. It will be in the next revision. :)

Regards,
Dave.

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v5 1/3] mempool: support external handler
  2016-05-31 21:11                               ` Jerin Jacob
@ 2016-06-01 10:46                                 ` Hunt, David
  2016-06-01 11:18                                   ` Jerin Jacob
  0 siblings, 1 reply; 238+ messages in thread
From: Hunt, David @ 2016-06-01 10:46 UTC (permalink / raw)
  To: Jerin Jacob, Olivier MATZ; +Cc: dev, yuanhan.liu, pmatilai, Jan Viktorin



On 5/31/2016 10:11 PM, Jerin Jacob wrote:
> On Tue, May 31, 2016 at 10:41:00PM +0200, Olivier MATZ wrote:
>> Hi,
>>
>> On 05/31/2016 06:03 PM, Jerin Jacob wrote:
>>> On Tue, May 31, 2016 at 04:37:02PM +0100, Hunt, David wrote:
>>>>
>>>> On 5/31/2016 9:53 AM, Jerin Jacob wrote:
>>>>> On Mon, May 30, 2016 at 12:27:26PM +0100, Hunt, David wrote:
>>>>>> New mempool handlers will use rte_mempool_create_empty(),
>>>>>> rte_mempool_set_handler(),
>>>>>> then rte_mempool_populate_*(). These three functions are new to this
>>>>>> release, to no problem
>>>>> Having separate APIs for external pool-manager create is worrisome in
>>>>> application perspective. Is it possible to have rte_mempool_[xmem]_create
>>>>> for the both external and existing SW pool manager and make
>>>>> rte_mempool_create_empty and rte_mempool_populate_*  internal functions.
>>>>>
>>>>> IMO, We can do that by selecting  specific rte_mempool_set_handler()
>>>>> based on _flags_ encoding, something like below
>>>>>
>>>>> bit 0 - 16   // generic bits uses across all the pool managers
>>>>> bit 16 - 23  // pool handler specific flags bits
>>>>> bit 24 - 31  // to select the specific pool manager(Up to 256 different flavors of
>>>>> pool managers, For backward compatibility, make '0'(in 24-31) to select
>>>>> existing SW pool manager.
>>>>>
>>>>> and applications can choose the handlers by selecting the flag in
>>>>> rte_mempool_[xmem]_create, That way it will be easy in testpmd or any other
>>>>> applications to choose the pool handler from command line etc in future.
>>>> There might be issues with the 8-bit handler number, as we'd have to add an
>>>> api call to
>>>> first get the index of a given hander by name, then OR it into the flags.
>>>> That would mean
>>>> also extra API calls for the non-default external handlers. I do agree with
>>>> the handler-specific
>>>> bits though.
>>> That would be an internal API(upper 8 bits to handler name). Right ?
>>> Seems to be OK for me.
>>>
>>>> Having the _empty and _set_handler  APIs seems to me to be OK for the
>>>> moment. Maybe Olivier could comment?
>>>>
>>> But need 3 APIs. Right? _empty , _set_handler and _populate ? I believe
>>> it is better reduce the public API in spec where ever possible ?
>>>
>>> Maybe Olivier could comment ?
>> Well, I think having 3 different functions is not a problem if the API
>> is clearer.
>>
>> In my opinion, the following:
>> 	rte_mempool_create_empty()
>> 	rte_mempool_set_handler()
>> 	rte_mempool_populate()
>>
>> is clearer than:
>> 	rte_mempool_create(15 args)
> But proposed scheme is not adding any new arguments to
> rte_mempool_create. It just extending the existing flag.
>
> rte_mempool_create(15 args) is still their as API for internal pool
> creation.
>
>> Splitting the flags into 3 groups, with one not beeing flags but a
>> pool handler number looks overcomplicated from a user perspective.
> I am concerned with seem less integration with existing applications,
> IMO, Its not worth having separate functions for external vs internal
> pool creation for application(now each every applications has to added this
> logic every where for no good reason), just my 2 cents.

I think there is always going to be some extra code in applications
that want to use an external mempool. The _set_handler approach does
create, set_handler, populate. The flags method queries the handler list to
get the index, sets the flag bits, then calls create. Both methods will
work.

But I think the _set_handler approach is more user friendly, so that
is the method I would lean towards.

>>>>> and we can remove "mbuf: get default mempool handler from configuration"
>>>>> change-set OR just add if RTE_MBUF_DEFAULT_MEMPOOL_HANDLER is defined then set
>>>>> the same with rte_mempool_set_handler in rte_mempool_[xmem]_create.
>>>>>
>>>>> What do you think?
>>>> The "configuration" patch is to allow users to quickly change the mempool
>>>> handler
>>>> by changing RTE_MBUF_DEFAULT_MEMPOOL_HANDLER to another string of a known
>>>> handler. It could just as easily be left out and use the rte_mempool_create.
>>>>
>>> Yes, I understand, but I am trying to avoid build time constant. IMO, It
>>> would be better by default RTE_MBUF_DEFAULT_MEMPOOL_HANDLER is not
>>> defined in config. and for quick change developers can introduce the build
>>> with RTE_MBUF_DEFAULT_MEMPOOL_HANDLER="specific handler"
>> My understanding of the compile-time configuration option was
>> to allow a specific architecture to define a specific hw-assisted
>> handler by default.
>>
>> Indeed, if there is no such need for now, we may remove it. But
>> we need a way to select another handler, at least in test-pmd
>> (in command line arguments?).
> like txflags in testpmd, IMO, mempool flags will help to select the handlers
> seamlessly as suggest above.
>
> If we are _not_ taking the flags based selection scheme then it makes to
> keep RTE_MBUF_DEFAULT_MEMPOOL_HANDLER

see comment above

>>>>>>> 2) IMO, It is better to change void *pool in struct rte_mempool to
>>>>>>> anonymous union type, something like below, so that mempool
>>>>>>> implementation can choose the best type.
>>>>>>> 	union {
>>>>>>> 		void *pool;
>>>>>>> 		uint64_t val;
>>>>>>> 	}
>>>>>> Could we do this by using the union for the *pool_config suggested above,
>>>>>> would that give
>>>>>> you what you need?
>>>>> It would be an extra overhead for external pool manager to _alloc_ memory
>>>>> and store the allocated pointer in mempool struct(as *pool) and use pool for
>>>>> pointing other data structures as some implementation need only
>>>>> limited bytes to store the external pool manager specific context.
>>>>>
>>>>> In order to fix this problem, We may classify fast path and slow path
>>>>> elements in struct rte_mempool and move all fast path elements in first
>>>>> cache line and create an empty opaque space in the remaining bytes in the
>>>>> cache line so that if the external pool manager needs only limited space
>>>>> then it is not required to allocate the separate memory to save the
>>>>> per core cache  in fast-path
>>>>>
>>>>> something like below,
>>>>> union {
>>>>> 	void *pool;
>>>>> 	uint64_t val;
>>>>> 	uint8_t extra_mem[16] // available free bytes in fast path cache line
>>>>>
>>>>> }
>>>> Something for the future, perhaps? Will the 8-bits in the flags suffice for
>>>> now?
>>> OK. But simple anonymous union for same type should be OK add now? Not
>>> much change I believe, If its difficult then postpone it
>>>
>>> union {
>>> 	void *pool;
>>> 	uint64_t val;
>>> }
>> I'm ok with the simple union with (void *) and (uint64_t).
>> Maybe "val" should be replaced by something more appropriate.
>> Is "pool_id" a better name?
> How about "opaque"?

I think I would lean towards pool_id in this case.


Regards,
David.

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v5 1/3] mempool: support external handler
  2016-06-01 10:46                                 ` Hunt, David
@ 2016-06-01 11:18                                   ` Jerin Jacob
  0 siblings, 0 replies; 238+ messages in thread
From: Jerin Jacob @ 2016-06-01 11:18 UTC (permalink / raw)
  To: Hunt, David; +Cc: Olivier MATZ, dev, yuanhan.liu, pmatilai, Jan Viktorin

On Wed, Jun 01, 2016 at 11:46:20AM +0100, Hunt, David wrote:
> 
> 
> On 5/31/2016 10:11 PM, Jerin Jacob wrote:
> > On Tue, May 31, 2016 at 10:41:00PM +0200, Olivier MATZ wrote:
> > > Hi,
> > > 
> > > On 05/31/2016 06:03 PM, Jerin Jacob wrote:
> > > > On Tue, May 31, 2016 at 04:37:02PM +0100, Hunt, David wrote:
> > > > > 
> > > > > On 5/31/2016 9:53 AM, Jerin Jacob wrote:
> > > > > > On Mon, May 30, 2016 at 12:27:26PM +0100, Hunt, David wrote:
> > > > > > > New mempool handlers will use rte_mempool_create_empty(),
> > > > > > > rte_mempool_set_handler(),
> > > > > > > then rte_mempool_populate_*(). These three functions are new to this
> > > > > > > release, to no problem
> > > > > > Having separate APIs for external pool-manager create is worrisome in
> > > > > > application perspective. Is it possible to have rte_mempool_[xmem]_create
> > > > > > for the both external and existing SW pool manager and make
> > > > > > rte_mempool_create_empty and rte_mempool_populate_*  internal functions.
> > > > > > 
> > > > > > IMO, We can do that by selecting  specific rte_mempool_set_handler()
> > > > > > based on _flags_ encoding, something like below
> > > > > > 
> > > > > > bit 0 - 16   // generic bits uses across all the pool managers
> > > > > > bit 16 - 23  // pool handler specific flags bits
> > > > > > bit 24 - 31  // to select the specific pool manager(Up to 256 different flavors of
> > > > > > pool managers, For backward compatibility, make '0'(in 24-31) to select
> > > > > > existing SW pool manager.
> > > > > > 
> > > > > > and applications can choose the handlers by selecting the flag in
> > > > > > rte_mempool_[xmem]_create, That way it will be easy in testpmd or any other
> > > > > > applications to choose the pool handler from command line etc in future.
> > > > > There might be issues with the 8-bit handler number, as we'd have to add an
> > > > > api call to
> > > > > first get the index of a given hander by name, then OR it into the flags.
> > > > > That would mean
> > > > > also extra API calls for the non-default external handlers. I do agree with
> > > > > the handler-specific
> > > > > bits though.
> > > > That would be an internal API(upper 8 bits to handler name). Right ?
> > > > Seems to be OK for me.
> > > > 
> > > > > Having the _empty and _set_handler  APIs seems to me to be OK for the
> > > > > moment. Maybe Olivier could comment?
> > > > > 
> > > > But need 3 APIs. Right? _empty , _set_handler and _populate ? I believe
> > > > it is better reduce the public API in spec where ever possible ?
> > > > 
> > > > Maybe Olivier could comment ?
> > > Well, I think having 3 different functions is not a problem if the API
> > > is clearer.
> > > 
> > > In my opinion, the following:
> > > 	rte_mempool_create_empty()
> > > 	rte_mempool_set_handler()
> > > 	rte_mempool_populate()
> > > 
> > > is clearer than:
> > > 	rte_mempool_create(15 args)
> > But proposed scheme is not adding any new arguments to
> > rte_mempool_create. It just extending the existing flag.
> > 
> > rte_mempool_create(15 args) is still their as API for internal pool
> > creation.
> > 
> > > Splitting the flags into 3 groups, with one not beeing flags but a
> > > pool handler number looks overcomplicated from a user perspective.
> > I am concerned with seem less integration with existing applications,
> > IMO, Its not worth having separate functions for external vs internal
> > pool creation for application(now each every applications has to added this
> > logic every where for no good reason), just my 2 cents.
> 
> I think that there is always going to be some  extra code in the
> applications
>  that want to use an external mempool. The _set_handler approach does
> create, set_hander, populate. The Flags method queries the handler list to
> get the index, sets the flags bits, then calls create. Both methods will
> work.

I was suggesting flags like the TXQ flags in ethdev, where the application
just selects the mode. I'm not sure why the application has to get the index first.

some thing like,
#define ETH_TXQ_FLAGS_NOMULTSEGS 0x0001 /**< nb_segs=1 for all mbufs */
#define ETH_TXQ_FLAGS_NOREFCOUNT 0x0002 /**< refcnt can be ignored */
#define ETH_TXQ_FLAGS_NOMULTMEMP 0x0004 /**< all bufs come from same mempool */ 

Anyway, it looks like no one else is much bothered about the external pool
manager creation API being different, so I've given up. No objections from my side :-)

> 
> But I think the _set_handler approach is more user friendly, therefore that
> it the method I would lean towards.
> 
> > > > > > and we can remove "mbuf: get default mempool handler from configuration"
> > > > > > change-set OR just add if RTE_MBUF_DEFAULT_MEMPOOL_HANDLER is defined then set
> > > > > > the same with rte_mempool_set_handler in rte_mempool_[xmem]_create.
> > > > > > 
> > > > > > What do you think?
> > > > > The "configuration" patch is to allow users to quickly change the mempool
> > > > > handler
> > > > > by changing RTE_MBUF_DEFAULT_MEMPOOL_HANDLER to another string of a known
> > > > > handler. It could just as easily be left out and use the rte_mempool_create.
> > > > > 
> > > > Yes, I understand, but I am trying to avoid build time constant. IMO, It
> > > > would be better by default RTE_MBUF_DEFAULT_MEMPOOL_HANDLER is not
> > > > defined in config. and for quick change developers can introduce the build
> > > > with RTE_MBUF_DEFAULT_MEMPOOL_HANDLER="specific handler"
> > > My understanding of the compile-time configuration option was
> > > to allow a specific architecture to define a specific hw-assisted
> > > handler by default.
> > > 
> > > Indeed, if there is no such need for now, we may remove it. But
> > > we need a way to select another handler, at least in test-pmd
> > > (in command line arguments?).
> > like txflags in testpmd, IMO, mempool flags will help to select the handlers
> > seamlessly as suggest above.
> > 
> > If we are _not_ taking the flags based selection scheme then it makes to
> > keep RTE_MBUF_DEFAULT_MEMPOOL_HANDLER
> 
> see comment above

Please add some means of selecting the external handler for existing
applications, so that we can test them in different modes.

Thanks,
Jerin

> 
> > > > > > > > 2) IMO, It is better to change void *pool in struct rte_mempool to
> > > > > > > > anonymous union type, something like below, so that mempool
> > > > > > > > implementation can choose the best type.
> > > > > > > > 	union {
> > > > > > > > 		void *pool;
> > > > > > > > 		uint64_t val;
> > > > > > > > 	}
> > > > > > > Could we do this by using the union for the *pool_config suggested above,
> > > > > > > would that give
> > > > > > > you what you need?
> > > > > > It would be an extra overhead for external pool manager to _alloc_ memory
> > > > > > and store the allocated pointer in mempool struct(as *pool) and use pool for
> > > > > > pointing other data structures as some implementation need only
> > > > > > limited bytes to store the external pool manager specific context.
> > > > > > 
> > > > > > In order to fix this problem, We may classify fast path and slow path
> > > > > > elements in struct rte_mempool and move all fast path elements in first
> > > > > > cache line and create an empty opaque space in the remaining bytes in the
> > > > > > cache line so that if the external pool manager needs only limited space
> > > > > > then it is not required to allocate the separate memory to save the
> > > > > > per core cache  in fast-path
> > > > > > 
> > > > > > something like below,
> > > > > > union {
> > > > > > 	void *pool;
> > > > > > 	uint64_t val;
> > > > > > 	uint8_t extra_mem[16] // available free bytes in fast path cache line
> > > > > > 
> > > > > > }
> > > > > Something for the future, perhaps? Will the 8-bits in the flags suffice for
> > > > > now?
> > > > OK. But simple anonymous union for same type should be OK add now? Not
> > > > much change I believe, If its difficult then postpone it
> > > > 
> > > > union {
> > > > 	void *pool;
> > > > 	uint64_t val;
> > > > }
> > > I'm ok with the simple union with (void *) and (uint64_t).
> > > Maybe "val" should be replaced by something more appropriate.
> > > Is "pool_id" a better name?
> > How about "opaque"?
> 
> I think I would lean towards pool_id in this case.
> 
> 
> Regards,
> David.

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [dpdk-dev,v5,1/3] mempool: support external handler
  2016-05-31 20:40                   ` Olivier MATZ
  2016-06-01  9:39                     ` Hunt, David
@ 2016-06-01 12:30                     ` Jan Viktorin
  1 sibling, 0 replies; 238+ messages in thread
From: Jan Viktorin @ 2016-06-01 12:30 UTC (permalink / raw)
  To: Olivier MATZ; +Cc: Hunt, David, dev, yuanhan.liu, pmatilai, jerin.jacob

On Tue, 31 May 2016 22:40:59 +0200
Olivier MATZ <olivier.matz@6wind.com> wrote:

> Hi,
> 
> On 05/31/2016 03:47 PM, Hunt, David wrote:
> > On 5/31/2016 1:06 PM, Jan Viktorin wrote:  
> >> On Tue, 31 May 2016 10:09:42 +0100
> >> "Hunt, David" <david.hunt@intel.com> wrote:
> >>  
> >>> The *p pointer is the opaque data for a given mempool handler (ring,
> >>> array, linked list, etc)  
> >> Again, doc comments...
> >>
> >> I don't like the obj_table representation to be an array of void *. I
> >> could see
> >> it already in DPDK for defining Ethernet driver queues, so, it's
> >> probably not
> >> an issue. I just say, I would prefer some basic type safety like
> >>
> >> struct rte_mempool_obj {
> >>     void *p;
> >> };
> >>
> >> Is there somebody with different opinions?
> >>
> >> [...]  
> > 
> > Comments added. I've left as a void* for the moment.  
> 
> Jan, could you please detail why you think having a
> rte_mempool_obj structure brings more safety?

First, void * does not say anything about the purpose of the argument,
so anybody who is not familiar with the mempool internals would be
lost for a while (as I was when studying the Ethernet queue API of DPDK).

The type safety here (limited as it is in C) guards against messing up the order
of arguments. When there are several void * args, you can accidentally pass the
wrong one. DPDK has quite strict compiler settings; even so, I don't consider
bare void * to be good practice in general.

When you have a named struct or a typedef, its definition usually contains doc
comments describing its purpose. Nobody usually writes good doc comments for
void * arguments of functions.

> 
> For now, I'm in favor of keeping the array of void *, because
> that's what we use in other mempool or ring functions.

It was just a suggestion... I don't consider this to be an issue (as stated
earlier).

Jan

> 
> 
> >>>>> +/** Structure defining a mempool handler. */  
> >>>> Later in the text, I suggested to rename rte_mempool_handler to
> >>>> rte_mempool_ops.
> >>>> I believe that it explains the purpose of this struct better. It
> >>>> would improve
> >>>> consistency in function names (the *_ext_* mark is very strange and
> >>>> inconsistent).  
> >>> I agree. I've gone through all the code and renamed to
> >>> rte_mempool_handler_ops.  
> >> Ok. I meant rte_mempool_ops because I find the word "handler" to be
> >> redundant.  
> > 
> > I prefer the use of the word handler, unless others also have opinions
> > either way?  
> 
> Well, I think rte_mempool_ops is clear enough, and shorter,
> so I'd vote for it.
> 
> 
> Regards,
> Olivier



-- 
   Jan Viktorin                  E-mail: Viktorin@RehiveTech.com
   System Architect              Web:    www.RehiveTech.com
   RehiveTech
   Brno, Czech Republic

^ permalink raw reply	[flat|nested] 238+ messages in thread

* [PATCH v6 0/5] mempool: add external mempool manager
  2016-05-19 13:44       ` mempool: external mempool manager David Hunt
                           ` (2 preceding siblings ...)
  2016-05-19 13:45         ` [PATCH v5 3/3] mbuf: get default mempool handler from configuration David Hunt
@ 2016-06-01 16:19         ` David Hunt
  2016-06-01 16:19           ` [PATCH v6 1/5] mempool: support external handler David Hunt
                             ` (5 more replies)
  3 siblings, 6 replies; 238+ messages in thread
From: David Hunt @ 2016-06-01 16:19 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, viktorin, jerin.jacob

Here's the latest version of the External Mempool Manager patchset.
It's re-based on top of the latest head as of 1st June 2016, including
Olivier's 35-part patch series on mempool re-org [1]

[1] http://dpdk.org/ml/archives/dev/2016-May/039229.html

Note: After applying the last patch, run "make config ..." before
compiling. It introduces a config file change. 

Note: Hopefully I've addressed all the extensive comments over the
last week. If I've missed any, please let me know, as it would
not have been intentional. I hope I've responded to all comments
via email on the mailing list.

v6 changes:

 * Moved the flags handling from rte_mempool_create_empty to
   rte_mempool_create, as it's only there for backward compatibility
 * Various comment additions and cleanup
 * Renamed rte_mempool_handler to rte_mempool_ops
 * Added a union for *pool and u64 pool_id in struct rte_mempool
 * split the original patch into a few parts for easier review.
 * rename functions with _ext_ to _ops_.
 * addressed review comments
 * renamed put and get functions to enqueue and dequeue
 * changed occurrences of rte_mempool_ops to const, as they
   contain function pointers (security)
 * split out the default external mempool handler into a separate
   patch for easier review

v5 changes:
 * rebasing, as it is dependent on another patch series [1]

v4 changes (Olivier Matz):
 * remove the rte_mempool_create_ext() function. To change the handler, the
   user has to do the following:
   - mp = rte_mempool_create_empty()
   - rte_mempool_set_handler(mp, "my_handler")
   - rte_mempool_populate_default(mp)
   This avoids adding another function with more than 10 arguments and
   duplicating the doxygen comments
 * change the api of rte_mempool_alloc_t: only the mempool pointer is required
   as all information is available in it
 * change the api of rte_mempool_free_t: remove return value
 * move inline wrapper functions from the .c to the .h (else they won't be
   inlined). This implies having one header file (rte_mempool.h); otherwise
   it would generate cross-dependency issues.
 * remove now unused MEMPOOL_F_INT_HANDLER (note: it was misused anyway due
   to the use of && instead of &)
 * fix build in debug mode (__MEMPOOL_STAT_ADD(mp, put_pool, n) remaining)
 * fix build with shared libraries (global handler has to be declared in
   the .map file)
 * rationalize #include order
 * remove unused function rte_mempool_get_handler_name()
 * rename some structures, fields, functions
 * remove the static in front of rte_tailq_elem rte_mempool_tailq (comment
   from Yuanhan)
 * test the ext mempool handler in the same file as the standard mempool tests,
   avoiding code duplication
 * rework the custom handler in mempool_test
 * rework a bit the patch selecting default mbuf pool handler
 * fix some doxygen comments

v3 changes:
 * simplified the file layout, renamed to rte_mempool_handler.[hc]
 * moved the default handlers into rte_mempool_default.c
 * moved the example handler out into app/test/test_ext_mempool.c
 * removed is_mc/is_mp change, slight perf degradation on sp cached operation
 * removed stack handler, may re-introduce at a later date
 * Changes out of code reviews

v2 changes:
 * There was a lot of duplicate code between rte_mempool_xmem_create and
   rte_mempool_create_ext. This has now been refactored and is now
   hopefully cleaner.
 * The RTE_NEXT_ABI define is now used to allow building of the library
   in a format that is compatible with binaries built against previous
   versions of DPDK.
 * Changes out of code reviews. Hopefully I've got most of them included.

The External Mempool Manager is an extension to the mempool API that allows
users to add and use an external mempool manager, which allows external memory
subsystems such as external hardware memory management systems and software
based memory allocators to be used with DPDK.

The existing API to the internal DPDK mempool manager will remain unchanged
and will be backward compatible. However, there will be an ABI breakage, as
the mempool struct is changing. These changes are all contained within
RTE_NEXT_ABI defs, and the current or next code can be selected with
the CONFIG_RTE_NEXT_ABI config setting

There are two aspects to external mempool manager.
  1. Adding the code for your new mempool handler. This is achieved by adding a
     new mempool handler source file into the librte_mempool library, and
     using the REGISTER_MEMPOOL_HANDLER macro.
  2. Using the new API to call rte_mempool_create_empty and
     rte_mempool_set_handler to create a new mempool
     using the name parameter to identify which handler to use.

New API calls added
 1. A new rte_mempool_create_empty() function
 2. rte_mempool_set_handler() which sets the mempool's handler
 3. An rte_mempool_populate_default() and rte_mempool_populate_anon() functions
    which populates the mempool using the relevant handler

Several external mempool managers may be used in the same application. A new
mempool can then be created by using the new 'create' function, providing the
mempool handler name to point the mempool to the relevant mempool manager
callback structure.

The old 'create' function can still be called by legacy programs, and will
internally work out the mempool handle based on the flags provided (single
producer, single consumer, etc). By default handles are created internally to
implement the built-in DPDK mempool manager and mempool types.

The external mempool manager needs to provide the following functions.
 1. alloc     - allocates the mempool memory, and adds each object onto a ring
 2. put       - puts an object back into the mempool once an application has
                finished with it
 3. get       - gets an object from the mempool for use by the application
 4. get_count - gets the number of available objects in the mempool
 5. free      - frees the mempool memory

Every time a get/put/get_count is called from the application/PMD, the
callback for that mempool is called. These functions are in the fast path,
and an unoptimised handler may limit performance.

The new APIs are as follows:

1. rte_mempool_create_empty

struct rte_mempool *
rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
    unsigned cache_size, unsigned private_data_size,
    int socket_id, unsigned flags);

2. rte_mempool_set_handler()

int
rte_mempool_set_handler(struct rte_mempool *mp, const char *name);

3. rte_mempool_populate_default()

int rte_mempool_populate_default(struct rte_mempool *mp);

4. rte_mempool_populate_anon()

int rte_mempool_populate_anon(struct rte_mempool *mp);

Please see rte_mempool.h for further information on the parameters.


The important thing to note is that the mempool handler is passed by name
to rte_mempool_set_handler, which looks through the handler array to
get the handler index, which is then stored in the rte_mempool structure. This
allows multiple processes to use the same mempool, as the function pointers
are accessed via the handler index.

The mempool handler structure contains callbacks to the implementation of
the handler, and is set up for registration as follows:

static const struct rte_mempool_handler handler_sp_mc = {
    .name = "ring_sp_mc",
    .alloc = rte_mempool_common_ring_alloc,
    .put = common_ring_sp_put,
    .get = common_ring_mc_get,
    .get_count = common_ring_get_count,
    .free = common_ring_free,
};

And then the following macro will register the handler in the array of handlers

REGISTER_MEMPOOL_HANDLER(handler_mp_mc);

For an example of a simple malloc-based mempool manager, see
lib/librte_mempool/custom_mempool.c

For an example of API usage, please see app/test/test_mempool.c, which
implements a rudimentary "custom_handler" mempool manager using simple mallocs
for each mempool object. This file also contains the callbacks and self
registration for the new handler.

David Hunt (4):
  mempool: support external handler
  mempool: remove rte_ring from rte_mempool struct
  mempool: add default external mempool handler
  mbuf: get default mempool handler from configuration

Olivier Matz (1):
  app/test: test external mempool handler

^ permalink raw reply	[flat|nested] 238+ messages in thread

* [PATCH v6 1/5] mempool: support external handler
  2016-06-01 16:19         ` [PATCH v6 0/5] mempool: add external mempool manager David Hunt
@ 2016-06-01 16:19           ` David Hunt
  2016-06-01 16:29             ` Hunt, David
  2016-06-01 17:54             ` Jan Viktorin
  2016-06-01 16:19           ` [PATCH v6 2/5] mempool: remove rte_ring from rte_mempool struct David Hunt
                             ` (4 subsequent siblings)
  5 siblings, 2 replies; 238+ messages in thread
From: David Hunt @ 2016-06-01 16:19 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, viktorin, jerin.jacob, David Hunt

Until now, the objects stored in a mempool were internally stored in a
ring. This patch introduces the possibility to register external handlers
replacing the ring.

The default behavior remains unchanged, but calling the new function
rte_mempool_set_handler() right after rte_mempool_create_empty() allows
the user to change the handler that will be used when populating
the mempool.

v7 changes:
  * Moved the flags handling from rte_mempool_create_empty to
    rte_mempool_create, as it's only there for backward compatibility
  * Various comment additions and cleanup
  * Renamed rte_mempool_handler to rte_mempool_ops
  * Added a union for *pool and u64 pool_id in struct rte_mempool

v6 changes:
  * split the original patch into a few parts for easier review.
  * rename functions with _ext_ to _ops_.
  * addressed some review comments
  * renamed put and get functions to enqueue and dequeue
  * renamed rte_mempool_handler struct to rte_mempool_handler_ops
  * changed occurrences of rte_mempool_handler_ops to const, as they
    contain function pointers (security)
  * added some extra comments

v5 changes: rebasing on top of 35 patch set mempool work.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
---
 lib/librte_mempool/Makefile              |   1 +
 lib/librte_mempool/rte_mempool.c         |  71 ++++------
 lib/librte_mempool/rte_mempool.h         | 235 ++++++++++++++++++++++++++++---
 lib/librte_mempool/rte_mempool_handler.c | 141 +++++++++++++++++++
 4 files changed, 384 insertions(+), 64 deletions(-)
 create mode 100644 lib/librte_mempool/rte_mempool_handler.c

diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
index 43423e0..793745f 100644
--- a/lib/librte_mempool/Makefile
+++ b/lib/librte_mempool/Makefile
@@ -42,6 +42,7 @@ LIBABIVER := 2
 
 # all source are stored in SRCS-y
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool.c
+SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_handler.c
 # install includes
 SYMLINK-$(CONFIG_RTE_LIBRTE_MEMPOOL)-include := rte_mempool.h
 
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index b54de43..9f34d30 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -148,7 +148,7 @@ mempool_add_elem(struct rte_mempool *mp, void *obj, phys_addr_t physaddr)
 #endif
 
 	/* enqueue in ring */
-	rte_ring_sp_enqueue(mp->ring, obj);
+	rte_mempool_ops_enqueue_bulk(mp, &obj, 1);
 }
 
 /* call obj_cb() for each mempool element */
@@ -303,40 +303,6 @@ rte_mempool_xmem_usage(__rte_unused void *vaddr, uint32_t elt_num,
 	return (size_t)paddr_idx << pg_shift;
 }
 
-/* create the internal ring */
-static int
-rte_mempool_ring_create(struct rte_mempool *mp)
-{
-	int rg_flags = 0, ret;
-	char rg_name[RTE_RING_NAMESIZE];
-	struct rte_ring *r;
-
-	ret = snprintf(rg_name, sizeof(rg_name),
-		RTE_MEMPOOL_MZ_FORMAT, mp->name);
-	if (ret < 0 || ret >= (int)sizeof(rg_name))
-		return -ENAMETOOLONG;
-
-	/* ring flags */
-	if (mp->flags & MEMPOOL_F_SP_PUT)
-		rg_flags |= RING_F_SP_ENQ;
-	if (mp->flags & MEMPOOL_F_SC_GET)
-		rg_flags |= RING_F_SC_DEQ;
-
-	/* Allocate the ring that will be used to store objects.
-	 * Ring functions will return appropriate errors if we are
-	 * running as a secondary process etc., so no checks made
-	 * in this function for that condition.
-	 */
-	r = rte_ring_create(rg_name, rte_align32pow2(mp->size + 1),
-		mp->socket_id, rg_flags);
-	if (r == NULL)
-		return -rte_errno;
-
-	mp->ring = r;
-	mp->flags |= MEMPOOL_F_RING_CREATED;
-	return 0;
-}
-
 /* free a memchunk allocated with rte_memzone_reserve() */
 static void
 rte_mempool_memchunk_mz_free(__rte_unused struct rte_mempool_memhdr *memhdr,
@@ -354,7 +320,7 @@ rte_mempool_free_memchunks(struct rte_mempool *mp)
 	void *elt;
 
 	while (!STAILQ_EMPTY(&mp->elt_list)) {
-		rte_ring_sc_dequeue(mp->ring, &elt);
+		rte_mempool_ops_dequeue_bulk(mp, &elt, 1);
 		(void)elt;
 		STAILQ_REMOVE_HEAD(&mp->elt_list, next);
 		mp->populated_size--;
@@ -383,13 +349,16 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
 	unsigned i = 0;
 	size_t off;
 	struct rte_mempool_memhdr *memhdr;
-	int ret;
 
 	/* create the internal ring if not already done */
 	if ((mp->flags & MEMPOOL_F_RING_CREATED) == 0) {
-		ret = rte_mempool_ring_create(mp);
-		if (ret < 0)
-			return ret;
+		rte_errno = 0;
+		mp->pool = rte_mempool_ops_alloc(mp);
+		if (mp->pool == NULL) {
+			if (rte_errno == 0)
+				return -EINVAL;
+			return -rte_errno;
+		}
 	}
 
 	/* mempool is already populated */
@@ -703,7 +672,7 @@ rte_mempool_free(struct rte_mempool *mp)
 	rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
 
 	rte_mempool_free_memchunks(mp);
-	rte_ring_free(mp->ring);
+	rte_mempool_ops_free(mp);
 	rte_memzone_free(mp->mz);
 }
 
@@ -815,6 +784,20 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
 		RTE_PTR_ADD(mp, MEMPOOL_HEADER_SIZE(mp, 0));
 
 	te->data = mp;
+
+	/*
+	 * Since we have 4 combinations of SP/SC and MP/MC, examine the flags to
+	 * select the correct index into the handler table.
+	 */
+	if ((flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET)) ==
+			(MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
+		rte_mempool_set_handler(mp, "ring_sp_sc");
+	else if (flags & MEMPOOL_F_SP_PUT)
+		rte_mempool_set_handler(mp, "ring_sp_mc");
+	else if (flags & MEMPOOL_F_SC_GET)
+		rte_mempool_set_handler(mp, "ring_mp_sc");
+	else
+		rte_mempool_set_handler(mp, "ring_mp_mc");
+
 	rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
 	TAILQ_INSERT_TAIL(mempool_list, te, next);
 	rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
@@ -930,7 +913,7 @@ rte_mempool_count(const struct rte_mempool *mp)
 	unsigned count;
 	unsigned lcore_id;
 
-	count = rte_ring_count(mp->ring);
+	count = rte_mempool_ops_get_count(mp);
 
 	if (mp->cache_size == 0)
 		return count;
@@ -1123,7 +1106,7 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 
 	fprintf(f, "mempool <%s>@%p\n", mp->name, mp);
 	fprintf(f, "  flags=%x\n", mp->flags);
-	fprintf(f, "  ring=<%s>@%p\n", mp->ring->name, mp->ring);
+	fprintf(f, "  pool=%p\n", mp->pool);
 	fprintf(f, "  phys_addr=0x%" PRIx64 "\n", mp->mz->phys_addr);
 	fprintf(f, "  nb_mem_chunks=%u\n", mp->nb_mem_chunks);
 	fprintf(f, "  size=%"PRIu32"\n", mp->size);
@@ -1144,7 +1127,7 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 	}
 
 	cache_count = rte_mempool_dump_cache(f, mp);
-	common_count = rte_ring_count(mp->ring);
+	common_count = rte_mempool_ops_get_count(mp);
 	if ((cache_count + common_count) > mp->size)
 		common_count = mp->size - cache_count;
 	fprintf(f, "  common_pool_count=%u\n", common_count);
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 60339bd..b659565 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -67,6 +67,7 @@
 #include <inttypes.h>
 #include <sys/queue.h>
 
+#include <rte_spinlock.h>
 #include <rte_log.h>
 #include <rte_debug.h>
 #include <rte_lcore.h>
@@ -204,9 +205,13 @@ struct rte_mempool_memhdr {
 struct rte_mempool {
 	char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool. */
 	struct rte_ring *ring;           /**< Ring to store objects. */
+	union {
+		void *pool;              /**< Ring or pool to store objects */
+		uint64_t *pool_id;       /**< External mempool identifier */
+	};
 	const struct rte_memzone *mz;    /**< Memzone where pool is allocated */
 	int flags;                       /**< Flags of the mempool. */
-	int socket_id;                   /**< Socket id passed at mempool creation. */
+	int socket_id;                   /**< Socket id passed at create */
 	uint32_t size;                   /**< Max size of the mempool. */
 	uint32_t cache_size;             /**< Size of per-lcore local cache. */
 	uint32_t cache_flushthresh;
@@ -217,6 +222,14 @@ struct rte_mempool {
 	uint32_t trailer_size;           /**< Size of trailer (after elt). */
 
 	unsigned private_data_size;      /**< Size of private data. */
+	/**
+	 * Index into rte_mempool_handler_table array of mempool handler ops
+	 * structs, which contain callback function pointers.
+	 * We're using an index here rather than pointers to the callbacks
+	 * to facilitate any secondary processes that may want to use
+	 * this mempool.
+	 */
+	int32_t handler_idx;
 
 	struct rte_mempool_cache *local_cache; /**< Per-lcore local cache */
 
@@ -325,6 +338,199 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp,
 #define __mempool_check_cookies(mp, obj_table_const, n, free) do {} while(0)
 #endif /* RTE_LIBRTE_MEMPOOL_DEBUG */
 
+#define RTE_MEMPOOL_HANDLER_NAMESIZE 32 /**< Max length of handler name. */
+
+/**
+ * Typedef for the external pool allocation callback.
+ * The handler's alloc function does whatever it needs to do to grab memory
+ * for this handler, and sets the *pool opaque pointer in the rte_mempool
+ * struct. In the default handler, *pool points to a ring; in other handlers,
+ * it will most likely point to a different type of data structure.
+ * It will be transparent to the application programmer.
+ */
+typedef void *(*rte_mempool_alloc_t)(struct rte_mempool *mp);
+
+/** Free the external pool opaque data (the data pointed to by *pool). */
+typedef void (*rte_mempool_free_t)(void *p);
+
+/**
+ * Put an object in the external pool.
+ * The *p pointer is the opaque data for a given mempool handler (ring,
+ * array, linked list, etc)
+ */
+typedef int (*rte_mempool_put_t)(void *p, void * const *obj_table, unsigned n);
+
+/** Get an object from the external pool. */
+typedef int (*rte_mempool_get_t)(void *p, void **obj_table, unsigned n);
+
+/** Return the number of available objects in the external pool. */
+typedef unsigned (*rte_mempool_get_count)(void *p);
+
+/** Structure defining a mempool handler. */
+struct rte_mempool_ops {
+	char name[RTE_MEMPOOL_HANDLER_NAMESIZE]; /**< Name of mempool handler */
+	rte_mempool_alloc_t alloc;       /**< Allocate the external pool. */
+	rte_mempool_free_t free;         /**< Free the external pool. */
+	rte_mempool_put_t put;           /**< Put an object. */
+	rte_mempool_get_t get;           /**< Get an object. */
+	rte_mempool_get_count get_count; /**< Get the number of available objs. */
+} __rte_cache_aligned;
+
+#define RTE_MEMPOOL_MAX_HANDLER_IDX 16  /**< Max registered handlers */
+
+/**
+ * Structure storing the table of registered handlers, each of which contains
+ * the function pointers for the mempool handler functions.
+ * Each process has its own storage for this handler struct array so that
+ * the mempools can be shared across primary and secondary processes.
+ * The indices used to access the array are valid across processes, whereas
+ * any function pointers stored directly in the mempool struct would not be.
+ * This results in us simply having "handler_idx" in the mempool struct.
+ */
+struct rte_mempool_handler_table {
+	rte_spinlock_t sl;     /**< Spinlock for add/delete. */
+	uint32_t num_handlers; /**< Number of handlers in the table. */
+	/**
+	 * Storage for all possible handlers.
+	 */
+	struct rte_mempool_ops handler_ops[RTE_MEMPOOL_MAX_HANDLER_IDX];
+} __rte_cache_aligned;
+
+/** Array of registered handlers */
+extern struct rte_mempool_handler_table rte_mempool_handler_table;
+
+/**
+ * @internal Get the mempool handler from its index.
+ *
+ * @param handler_idx
+ *   The index of the handler in the handler table. It must be a valid
+ *   index: (0 <= idx < num_handlers).
+ * @return
+ *   The pointer to the handler in the table.
+ */
+static inline struct rte_mempool_ops *
+rte_mempool_handler_get(int handler_idx)
+{
+	return &rte_mempool_handler_table.handler_ops[handler_idx];
+}
+
+/**
+ * @internal wrapper for external mempool manager alloc callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @return
+ *   The opaque pointer to the external pool.
+ */
+void *
+rte_mempool_ops_alloc(struct rte_mempool *mp);
+
+/**
+ * @internal wrapper for external mempool manager get callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param obj_table
+ *   Pointer to a table of void * pointers (objects).
+ * @param n
+ *   Number of objects to get.
+ * @return
+ *   - 0: Success; got n objects.
+ *   - <0: Error; code of handler get function.
+ */
+static inline int
+rte_mempool_ops_dequeue_bulk(struct rte_mempool *mp,
+		void **obj_table, unsigned n)
+{
+	struct rte_mempool_ops *handler;
+
+	handler = rte_mempool_handler_get(mp->handler_idx);
+	return handler->get(mp->pool, obj_table, n);
+}
+
+/**
+ * @internal wrapper for external mempool manager put callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param obj_table
+ *   Pointer to a table of void * pointers (objects).
+ * @param n
+ *   Number of objects to put.
+ * @return
+ *   - 0: Success; n objects supplied.
+ *   - <0: Error; code of handler put function.
+ */
+static inline int
+rte_mempool_ops_enqueue_bulk(struct rte_mempool *mp, void * const *obj_table,
+		unsigned n)
+{
+	struct rte_mempool_ops *handler;
+
+	handler = rte_mempool_handler_get(mp->handler_idx);
+	return handler->put(mp->pool, obj_table, n);
+}
+
+/**
+ * @internal wrapper for external mempool manager get_count callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @return
+ *   The number of available objects in the external pool.
+ */
+unsigned
+rte_mempool_ops_get_count(const struct rte_mempool *mp);
+
+/**
+ * @internal wrapper for external mempool manager free callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ */
+void
+rte_mempool_ops_free(struct rte_mempool *mp);
+
+/**
+ * Set the handler of a mempool
+ *
+ * This can only be done on a mempool that is not populated, i.e. just after
+ * a call to rte_mempool_create_empty().
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param name
+ *   Name of the handler.
+ * @return
+ *   - 0: Success; the new handler is configured.
+ *   - -EINVAL: invalid handler name provided.
+ *   - -EEXIST: the mempool already has a handler assigned.
+ */
+int
+rte_mempool_set_handler(struct rte_mempool *mp, const char *name);
+
+/**
+ * Register an external pool handler.
+ *
+ * @param h
+ *   Pointer to the external pool handler
+ * @return
+ *   - >=0: Success; the index of the handler in the table.
+ *   - -EINVAL: missing callbacks while registering the handler.
+ *   - -ENOSPC: the maximum number of handlers has been reached.
+ */
+int rte_mempool_handler_register(const struct rte_mempool_ops *h);
+
+/**
+ * Macro to statically register an external pool handler.
+ */
+#define MEMPOOL_REGISTER_HANDLER(h)					\
+	void mp_hdlr_init_##h(void);					\
+	void __attribute__((constructor, used)) mp_hdlr_init_##h(void)	\
+	{								\
+		rte_mempool_handler_register(&h);			\
+	}
+
 /**
  * An object callback function for mempool.
  *
@@ -774,7 +980,7 @@ __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
 	cache->len += n;
 
 	if (cache->len >= flushthresh) {
-		rte_ring_mp_enqueue_bulk(mp->ring, &cache->objs[cache_size],
+		rte_mempool_ops_enqueue_bulk(mp, &cache->objs[cache_size],
 				cache->len - cache_size);
 		cache->len = cache_size;
 	}
@@ -785,19 +991,10 @@ ring_enqueue:
 
 	/* push remaining objects in ring */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-	if (is_mp) {
-		if (rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n) < 0)
-			rte_panic("cannot put objects in mempool\n");
-	}
-	else {
-		if (rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n) < 0)
-			rte_panic("cannot put objects in mempool\n");
-	}
+	if (rte_mempool_ops_enqueue_bulk(mp, obj_table, n) < 0)
+		rte_panic("cannot put objects in mempool\n");
 #else
-	if (is_mp)
-		rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n);
-	else
-		rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n);
+	rte_mempool_ops_enqueue_bulk(mp, obj_table, n);
 #endif
 }
 
@@ -922,7 +1119,7 @@ rte_mempool_put(struct rte_mempool *mp, void *obj)
  */
 static inline int __attribute__((always_inline))
 __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
-		   unsigned n, int is_mc)
+		   unsigned n, __rte_unused int is_mc)
 {
 	int ret;
 	struct rte_mempool_cache *cache;
@@ -945,7 +1142,8 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
 		uint32_t req = n + (cache_size - cache->len);
 
 		/* How many do we require i.e. number to fill the cache + the request */
-		ret = rte_ring_mc_dequeue_bulk(mp->ring, &cache->objs[cache->len], req);
+		ret = rte_mempool_ops_dequeue_bulk(mp,
+			&cache->objs[cache->len], req);
 		if (unlikely(ret < 0)) {
 			/*
 			 * In the offchance that we are buffer constrained,
@@ -972,10 +1170,7 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
 ring_dequeue:
 
 	/* get remaining objects from ring */
-	if (is_mc)
-		ret = rte_ring_mc_dequeue_bulk(mp->ring, obj_table, n);
-	else
-		ret = rte_ring_sc_dequeue_bulk(mp->ring, obj_table, n);
+	ret = rte_mempool_ops_dequeue_bulk(mp, obj_table, n);
 
 	if (ret < 0)
 		__MEMPOOL_STAT_ADD(mp, get_fail, n);
diff --git a/lib/librte_mempool/rte_mempool_handler.c b/lib/librte_mempool/rte_mempool_handler.c
new file mode 100644
index 0000000..ed85d65
--- /dev/null
+++ b/lib/librte_mempool/rte_mempool_handler.c
@@ -0,0 +1,141 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2016 6WIND S.A.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <string.h>
+
+#include <rte_mempool.h>
+
+/* indirect jump table to support external memory pools */
+struct rte_mempool_handler_table rte_mempool_handler_table = {
+	.sl = RTE_SPINLOCK_INITIALIZER,
+	.num_handlers = 0
+};
+
+/* add a new handler in rte_mempool_handler_table, return its index */
+int
+rte_mempool_handler_register(const struct rte_mempool_ops *h)
+{
+	struct rte_mempool_ops *handler;
+	int16_t handler_idx;
+
+	rte_spinlock_lock(&rte_mempool_handler_table.sl);
+
+	if (rte_mempool_handler_table.num_handlers >=
+			RTE_MEMPOOL_MAX_HANDLER_IDX) {
+		rte_spinlock_unlock(&rte_mempool_handler_table.sl);
+		RTE_LOG(ERR, MEMPOOL,
+			"Maximum number of mempool handlers exceeded\n");
+		return -ENOSPC;
+	}
+
+	if (h->put == NULL || h->get == NULL || h->get_count == NULL) {
+		rte_spinlock_unlock(&rte_mempool_handler_table.sl);
+		RTE_LOG(ERR, MEMPOOL,
+			"Missing callback while registering mempool handler\n");
+		return -EINVAL;
+	}
+
+	handler_idx = rte_mempool_handler_table.num_handlers++;
+	handler = &rte_mempool_handler_table.handler_ops[handler_idx];
+	snprintf(handler->name, sizeof(handler->name), "%s", h->name);
+	handler->alloc = h->alloc;
+	handler->free = h->free;
+	handler->put = h->put;
+	handler->get = h->get;
+	handler->get_count = h->get_count;
+
+	rte_spinlock_unlock(&rte_mempool_handler_table.sl);
+
+	return handler_idx;
+}
+
+/* wrapper to allocate an external pool handler */
+void *
+rte_mempool_ops_alloc(struct rte_mempool *mp)
+{
+	struct rte_mempool_ops *handler;
+
+	handler = rte_mempool_handler_get(mp->handler_idx);
+	if (handler->alloc == NULL)
+		return NULL;
+	return handler->alloc(mp);
+}
+
+/* wrapper to free an external pool handler */
+void
+rte_mempool_ops_free(struct rte_mempool *mp)
+{
+	struct rte_mempool_ops *handler;
+
+	handler = rte_mempool_handler_get(mp->handler_idx);
+	if (handler->free == NULL)
+		return;
+	return handler->free(mp);
+}
+
+/* wrapper to get available objects in an external pool handler */
+unsigned int
+rte_mempool_ops_get_count(const struct rte_mempool *mp)
+{
+	struct rte_mempool_ops *handler;
+
+	handler = rte_mempool_handler_get(mp->handler_idx);
+	return handler->get_count(mp->pool);
+}
+
+/* sets a handler previously registered by rte_mempool_handler_register */
+int
+rte_mempool_set_handler(struct rte_mempool *mp, const char *name)
+{
+	struct rte_mempool_ops *handler = NULL;
+	unsigned i;
+
+	/* too late, the mempool is already populated */
+	if (mp->flags & MEMPOOL_F_RING_CREATED)
+		return -EEXIST;
+
+	for (i = 0; i < rte_mempool_handler_table.num_handlers; i++) {
+		if (!strcmp(name,
+				rte_mempool_handler_table.handler_ops[i].name)) {
+			handler = &rte_mempool_handler_table.handler_ops[i];
+			break;
+		}
+	}
+
+	if (handler == NULL)
+		return -EINVAL;
+
+	mp->handler_idx = i;
+	return 0;
+}
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 238+ messages in thread

* [PATCH v6 2/5] mempool: remove rte_ring from rte_mempool struct
  2016-06-01 16:19         ` [PATCH v6 0/5] mempool: add external mempool manager David Hunt
  2016-06-01 16:19           ` [PATCH v6 1/5] mempool: support external handler David Hunt
@ 2016-06-01 16:19           ` David Hunt
  2016-06-01 16:19           ` [PATCH v6 3/5] mempool: add default external mempool handler David Hunt
                             ` (3 subsequent siblings)
  5 siblings, 0 replies; 238+ messages in thread
From: David Hunt @ 2016-06-01 16:19 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, viktorin, jerin.jacob, David Hunt

Now that we're moving to an external mempool handler, which
uses a void *pool as a pointer to the pool data, remove the
unneeded ring pointer from the mempool struct.

Signed-off-by: David Hunt <david.hunt@intel.com>
---
 app/test/test_mempool_perf.c     | 1 -
 lib/librte_mempool/rte_mempool.h | 1 -
 2 files changed, 2 deletions(-)

diff --git a/app/test/test_mempool_perf.c b/app/test/test_mempool_perf.c
index cdc02a0..091c1df 100644
--- a/app/test/test_mempool_perf.c
+++ b/app/test/test_mempool_perf.c
@@ -161,7 +161,6 @@ per_lcore_mempool_test(__attribute__((unused)) void *arg)
 							   n_get_bulk);
 				if (unlikely(ret < 0)) {
 					rte_mempool_dump(stdout, mp);
-					rte_ring_dump(stdout, mp->ring);
 					/* in this case, objects are lost... */
 					return -1;
 				}
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index b659565..00eb467 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -204,7 +204,6 @@ struct rte_mempool_memhdr {
  */
 struct rte_mempool {
 	char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool. */
-	struct rte_ring *ring;           /**< Ring to store objects. */
 	union {
 		void *pool;              /**< Ring or pool to store objects */
 		uint64_t *pool_id;       /**< External mempool identifier */
-- 
2.5.5


* [PATCH v6 3/5] mempool: add default external mempool handler
  2016-06-01 16:19         ` [PATCH v6 0/5] mempool: add external mempool manager David Hunt
  2016-06-01 16:19           ` [PATCH v6 1/5] mempool: support external handler David Hunt
  2016-06-01 16:19           ` [PATCH v6 2/5] mempool: remove rte_ring from rte_mempool struct David Hunt
@ 2016-06-01 16:19           ` David Hunt
  2016-06-01 16:19           ` [PATCH v6 4/5] app/test: test " David Hunt
                             ` (2 subsequent siblings)
  5 siblings, 0 replies; 238+ messages in thread
From: David Hunt @ 2016-06-01 16:19 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, viktorin, jerin.jacob, David Hunt

The first patch in this series added the framework for an external
mempool manager. This patch in the series adds a set of default
handlers based on rte_ring.

v6 changes: split out into a separate patch for easier review.

Signed-off-by: David Hunt <david.hunt@intel.com>
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
 lib/librte_mempool/Makefile              |   1 +
 lib/librte_mempool/rte_mempool.h         |   2 +
 lib/librte_mempool/rte_mempool_default.c | 153 +++++++++++++++++++++++++++++++
 3 files changed, 156 insertions(+)
 create mode 100644 lib/librte_mempool/rte_mempool_default.c

diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
index 793745f..f19366e 100644
--- a/lib/librte_mempool/Makefile
+++ b/lib/librte_mempool/Makefile
@@ -43,6 +43,7 @@ LIBABIVER := 2
 # all source are stored in SRCS-y
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool.c
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_handler.c
+SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_default.c
 # install includes
 SYMLINK-$(CONFIG_RTE_LIBRTE_MEMPOOL)-include := rte_mempool.h
 
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 00eb467..17f1e6f 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -410,6 +410,8 @@ extern struct rte_mempool_handler_table rte_mempool_handler_table;
 static inline struct rte_mempool_ops *
 rte_mempool_handler_get(int handler_idx)
 {
+	RTE_VERIFY(handler_idx < RTE_MEMPOOL_MAX_HANDLER_IDX);
+
 	return &rte_mempool_handler_table.handler_ops[handler_idx];
 }
 
diff --git a/lib/librte_mempool/rte_mempool_default.c b/lib/librte_mempool/rte_mempool_default.c
new file mode 100644
index 0000000..a4c0d9d
--- /dev/null
+++ b/lib/librte_mempool/rte_mempool_default.c
@@ -0,0 +1,153 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <string.h>
+
+#include <rte_errno.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+
+static int
+common_ring_mp_put(void *p, void * const *obj_table, unsigned n)
+{
+	return rte_ring_mp_enqueue_bulk((struct rte_ring *)p, obj_table, n);
+}
+
+static int
+common_ring_sp_put(void *p, void * const *obj_table, unsigned n)
+{
+	return rte_ring_sp_enqueue_bulk((struct rte_ring *)p, obj_table, n);
+}
+
+static int
+common_ring_mc_get(void *p, void **obj_table, unsigned n)
+{
+	return rte_ring_mc_dequeue_bulk((struct rte_ring *)p, obj_table, n);
+}
+
+static int
+common_ring_sc_get(void *p, void **obj_table, unsigned n)
+{
+	return rte_ring_sc_dequeue_bulk((struct rte_ring *)p, obj_table, n);
+}
+
+static unsigned
+common_ring_get_count(void *p)
+{
+	return rte_ring_count((struct rte_ring *)p);
+}
+
+static void *
+common_ring_alloc(struct rte_mempool *mp)
+{
+	int rg_flags = 0, ret;
+	char rg_name[RTE_RING_NAMESIZE];
+	struct rte_ring *r;
+
+	ret = snprintf(rg_name, sizeof(rg_name),
+		RTE_MEMPOOL_MZ_FORMAT, mp->name);
+	if (ret < 0 || ret >= (int)sizeof(rg_name)) {
+		rte_errno = ENAMETOOLONG;
+		return NULL;
+	}
+
+	/* ring flags */
+	if (mp->flags & MEMPOOL_F_SP_PUT)
+		rg_flags |= RING_F_SP_ENQ;
+	if (mp->flags & MEMPOOL_F_SC_GET)
+		rg_flags |= RING_F_SC_DEQ;
+
+	/* Allocate the ring that will be used to store objects.
+	 * Ring functions will return appropriate errors if we are
+	 * running as a secondary process etc., so no checks made
+	 * in this function for that condition. */
+	r = rte_ring_create(rg_name, rte_align32pow2(mp->size + 1),
+		mp->socket_id, rg_flags);
+
+	return r;
+}
+
+static void
+common_ring_free(void *p)
+{
+	rte_ring_free((struct rte_ring *)p);
+}
+
+/*
+ * The following 4 declarations of mempool handler ops structs address
+ * the need for the backward compatible handlers for
+ * single/multi producers and single/multi consumers as dictated by the
+ * flags provided to the rte_mempool_create function
+ */
+static const struct rte_mempool_ops handler_mp_mc = {
+	.name = "ring_mp_mc",
+	.alloc = common_ring_alloc,
+	.free = common_ring_free,
+	.put = common_ring_mp_put,
+	.get = common_ring_mc_get,
+	.get_count = common_ring_get_count,
+};
+
+static const struct rte_mempool_ops handler_sp_sc = {
+	.name = "ring_sp_sc",
+	.alloc = common_ring_alloc,
+	.free = common_ring_free,
+	.put = common_ring_sp_put,
+	.get = common_ring_sc_get,
+	.get_count = common_ring_get_count,
+};
+
+static const struct rte_mempool_ops handler_mp_sc = {
+	.name = "ring_mp_sc",
+	.alloc = common_ring_alloc,
+	.free = common_ring_free,
+	.put = common_ring_mp_put,
+	.get = common_ring_sc_get,
+	.get_count = common_ring_get_count,
+};
+
+static const struct rte_mempool_ops handler_sp_mc = {
+	.name = "ring_sp_mc",
+	.alloc = common_ring_alloc,
+	.free = common_ring_free,
+	.put = common_ring_sp_put,
+	.get = common_ring_mc_get,
+	.get_count = common_ring_get_count,
+};
+
+MEMPOOL_REGISTER_HANDLER(handler_mp_mc);
+MEMPOOL_REGISTER_HANDLER(handler_sp_sc);
+MEMPOOL_REGISTER_HANDLER(handler_mp_sc);
+MEMPOOL_REGISTER_HANDLER(handler_sp_mc);
-- 
2.5.5


* [PATCH v6 4/5] app/test: test external mempool handler
  2016-06-01 16:19         ` [PATCH v6 0/5] mempool: add external mempool manager David Hunt
                             ` (2 preceding siblings ...)
  2016-06-01 16:19           ` [PATCH v6 3/5] mempool: add default external mempool handler David Hunt
@ 2016-06-01 16:19           ` David Hunt
  2016-06-01 16:19           ` [PATCH v6 5/5] mbuf: get default mempool handler from configuration David Hunt
  2016-06-02 13:27           ` [PATCH v7 0/5] mempool: add external mempool manager David Hunt
  5 siblings, 0 replies; 238+ messages in thread
From: David Hunt @ 2016-06-01 16:19 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, viktorin, jerin.jacob, David Hunt

Use a minimal custom mempool external handler and check that it also
passes basic mempool autotests.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
---
 app/test/test_mempool.c | 114 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 114 insertions(+)

diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c
index b586249..518461f 100644
--- a/app/test/test_mempool.c
+++ b/app/test/test_mempool.c
@@ -83,6 +83,97 @@
 static rte_atomic32_t synchro;
 
 /*
+ * Simple example of a custom mempool structure. Holds pointers to all the
+ * elements in a flat array, protected by a spinlock.
+ */
+struct custom_mempool {
+	rte_spinlock_t lock;
+	unsigned count;
+	unsigned size;
+	void *elts[];
+};
+
+/*
+ * Allocate the custom mempool structure, with room for an array of
+ * pointers to all of the pool's elements.
+ */
+static void *
+custom_mempool_alloc(struct rte_mempool *mp)
+{
+	struct custom_mempool *cm;
+
+	cm = rte_zmalloc("custom_mempool",
+		sizeof(struct custom_mempool) + mp->size * sizeof(void *), 0);
+	if (cm == NULL)
+		return NULL;
+
+	rte_spinlock_init(&cm->lock);
+	cm->count = 0;
+	cm->size = mp->size;
+	return cm;
+}
+
+static void
+custom_mempool_free(void *p)
+{
+	rte_free(p);
+}
+
+static int
+custom_mempool_put(void *p, void * const *obj_table, unsigned n)
+{
+	struct custom_mempool *cm = (struct custom_mempool *)p;
+	int ret = 0;
+
+	rte_spinlock_lock(&cm->lock);
+	if (cm->count + n > cm->size) {
+		ret = -ENOBUFS;
+	} else {
+		memcpy(&cm->elts[cm->count], obj_table, sizeof(void *) * n);
+		cm->count += n;
+	}
+	rte_spinlock_unlock(&cm->lock);
+	return ret;
+}
+
+
+static int
+custom_mempool_get(void *p, void **obj_table, unsigned n)
+{
+	struct custom_mempool *cm = (struct custom_mempool *)p;
+	int ret = 0;
+
+	rte_spinlock_lock(&cm->lock);
+	if (n > cm->count) {
+		ret = -ENOENT;
+	} else {
+		cm->count -= n;
+		memcpy(obj_table, &cm->elts[cm->count], sizeof(void *) * n);
+	}
+	rte_spinlock_unlock(&cm->lock);
+	return ret;
+}
+
+static unsigned
+custom_mempool_get_count(void *p)
+{
+	struct custom_mempool *cm = (struct custom_mempool *)p;
+
+	return cm->count;
+}
+
+static struct rte_mempool_ops mempool_handler_custom = {
+	.name = "custom_handler",
+	.alloc = custom_mempool_alloc,
+	.free = custom_mempool_free,
+	.put = custom_mempool_put,
+	.get = custom_mempool_get,
+	.get_count = custom_mempool_get_count,
+};
+
+MEMPOOL_REGISTER_HANDLER(mempool_handler_custom);
+
+/*
  * save the object number in the first 4 bytes of object data. All
  * other bytes are set to 0.
  */
@@ -477,6 +568,7 @@ test_mempool(void)
 {
 	struct rte_mempool *mp_cache = NULL;
 	struct rte_mempool *mp_nocache = NULL;
+	struct rte_mempool *mp_ext = NULL;
 
 	rte_atomic32_init(&synchro);
 
@@ -505,6 +597,27 @@ test_mempool(void)
 		goto err;
 	}
 
+	/* create a mempool with an external handler */
+	mp_ext = rte_mempool_create_empty("test_ext",
+		MEMPOOL_SIZE,
+		MEMPOOL_ELT_SIZE,
+		RTE_MEMPOOL_CACHE_MAX_SIZE, 0,
+		SOCKET_ID_ANY, 0);
+
+	if (mp_ext == NULL) {
+		printf("cannot allocate mp_ext mempool\n");
+		goto err;
+	}
+	if (rte_mempool_set_handler(mp_ext, "custom_handler") < 0) {
+		printf("cannot set custom handler\n");
+		goto err;
+	}
+	if (rte_mempool_populate_default(mp_ext) < 0) {
+		printf("cannot populate mp_ext mempool\n");
+		goto err;
+	}
+	rte_mempool_obj_iter(mp_ext, my_obj_init, NULL);
+
 	/* retrieve the mempool from its name */
 	if (rte_mempool_lookup("test_nocache") != mp_nocache) {
 		printf("Cannot lookup mempool from its name\n");
@@ -545,6 +658,7 @@ test_mempool(void)
 err:
 	rte_mempool_free(mp_nocache);
 	rte_mempool_free(mp_cache);
+	rte_mempool_free(mp_ext);
 	return -1;
 }
 
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 238+ messages in thread

* [PATCH v6 5/5] mbuf: get default mempool handler from configuration
  2016-06-01 16:19         ` [PATCH v6 0/5] mempool: add external mempool manager David Hunt
                             ` (3 preceding siblings ...)
  2016-06-01 16:19           ` [PATCH v6 4/5] app/test: test " David Hunt
@ 2016-06-01 16:19           ` David Hunt
  2016-06-02 13:27           ` [PATCH v7 0/5] mempool: add external mempool manager David Hunt
  5 siblings, 0 replies; 238+ messages in thread
From: David Hunt @ 2016-06-01 16:19 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, viktorin, jerin.jacob, David Hunt

By default, the mempool handler used for mbuf allocations is a multi
producer and multi consumer ring. We could imagine a target (maybe some
network processors?) that provides a hardware-assisted pool
mechanism. In this case, the default configuration for this architecture
would contain a different value for RTE_MBUF_DEFAULT_MEMPOOL_HANDLER.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
---
 config/common_base         |  1 +
 lib/librte_mbuf/rte_mbuf.c | 26 ++++++++++++++++++++++----
 2 files changed, 23 insertions(+), 4 deletions(-)

diff --git a/config/common_base b/config/common_base
index 47c26f6..cd04f54 100644
--- a/config/common_base
+++ b/config/common_base
@@ -394,6 +394,7 @@ CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG=n
 #
 CONFIG_RTE_LIBRTE_MBUF=y
 CONFIG_RTE_LIBRTE_MBUF_DEBUG=n
+CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_HANDLER="ring_mp_mc"
 CONFIG_RTE_MBUF_REFCNT_ATOMIC=y
 CONFIG_RTE_PKTMBUF_HEADROOM=128
 
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index eec1456..7d855f0 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -153,6 +153,7 @@ rte_pktmbuf_pool_create(const char *name, unsigned n,
 	unsigned cache_size, uint16_t priv_size, uint16_t data_room_size,
 	int socket_id)
 {
+	struct rte_mempool *mp;
 	struct rte_pktmbuf_pool_private mbp_priv;
 	unsigned elt_size;
 
@@ -167,10 +168,27 @@ rte_pktmbuf_pool_create(const char *name, unsigned n,
 	mbp_priv.mbuf_data_room_size = data_room_size;
 	mbp_priv.mbuf_priv_size = priv_size;
 
-	return rte_mempool_create(name, n, elt_size,
-		cache_size, sizeof(struct rte_pktmbuf_pool_private),
-		rte_pktmbuf_pool_init, &mbp_priv, rte_pktmbuf_init, NULL,
-		socket_id, 0);
+	mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
+		 sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
+	if (mp == NULL)
+		return NULL;
+
+	rte_errno = rte_mempool_set_handler(mp,
+			RTE_MBUF_DEFAULT_MEMPOOL_HANDLER);
+	if (rte_errno != 0) {
+		RTE_LOG(ERR, MBUF, "error setting mempool handler\n");
+		return NULL;
+	}
+	rte_pktmbuf_pool_init(mp, &mbp_priv);
+
+	if (rte_mempool_populate_default(mp) < 0) {
+		rte_mempool_free(mp);
+		return NULL;
+	}
+
+	rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);
+
+	return mp;
 }
 
 /* do some sanity checks on a mbuf: panic if it fails */
-- 
2.5.5


* Re: [PATCH v6 1/5] mempool: support external handler
  2016-06-01 16:19           ` [PATCH v6 1/5] mempool: support external handler David Hunt
@ 2016-06-01 16:29             ` Hunt, David
  2016-06-01 17:54             ` Jan Viktorin
  1 sibling, 0 replies; 238+ messages in thread
From: Hunt, David @ 2016-06-01 16:29 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, viktorin, jerin.jacob



On 6/1/2016 5:19 PM, David Hunt wrote:
> Until now, the objects stored in a mempool were internally stored in a
> ring. This patch introduces the possibility to register external handlers
> replacing the ring.
>
> The default behavior remains unchanged, but calling the new function
> rte_mempool_set_handler() right after rte_mempool_create_empty() allows
> the user to change the handler that will be used when populating
> the mempool.
>
> v7 changes:
>    * Moved the flags handling from rte_mempool_create_empty to
>      rte_mempool_create, as it's only there for backward compatibility
>    * Various comment additions and cleanup
>    * Renamed rte_mempool_handler to rte_mempool_ops
>    * Added a union for *pool and u64 pool_id in struct rte_mempool

These v7 changes should be merged with the v6 changes below, as this is a 
v6 patch.
Or removed altogether, as they are in the cover letter.

> v6 changes:
>    * split the original patch into a few parts for easier review.
>    * rename functions with _ext_ to _ops_.
>    * addressed some review comments
>    * renamed put and get functions to enqueue and dequeue
>    * renamed rte_mempool_handler struct to rte_mempool_handler_ops
>    * changed occurrences of rte_mempool_handler_ops to const, as they
>      contain function pointers (security)
>    * added some extra comments
>
>

[...]


* Re: [PATCH v6 1/5] mempool: support external handler
  2016-06-01 16:19           ` [PATCH v6 1/5] mempool: support external handler David Hunt
  2016-06-01 16:29             ` Hunt, David
@ 2016-06-01 17:54             ` Jan Viktorin
  2016-06-02  9:11               ` Hunt, David
  2016-06-02 11:23               ` Hunt, David
  1 sibling, 2 replies; 238+ messages in thread
From: Jan Viktorin @ 2016-06-01 17:54 UTC (permalink / raw)
  To: David Hunt; +Cc: dev, olivier.matz, jerin.jacob

Hello David,

the rename s/handler/ops/ has a lot of residues. Sorry for that :). I tried to
mark most of them. Otherwise, I couldn't see many more serious issues for now.

Just, note the s/pool/priv/ rename suggestion.

On Wed,  1 Jun 2016 17:19:54 +0100
David Hunt <david.hunt@intel.com> wrote:

> Until now, the objects stored in a mempool were internally stored in a
> ring. This patch introduces the possibility to register external handlers
> replacing the ring.
> 
> The default behavior remains unchanged, but calling the new function
> rte_mempool_set_handler() right after rte_mempool_create_empty() allows
> the user to change the handler that will be used when populating
> the mempool.
> 
> v7 changes:
>   * Moved the flags handling from rte_mempool_create_empty to
>     rte_mempool_create, as it's only there for backward compatibility
>   * Various comment additions and cleanup
>   * Renamed rte_mempool_handler to rte_mempool_ops
>   * Added a union for *pool and u64 pool_id in struct rte_mempool
> 
> v6 changes:
>   * split the original patch into a few parts for easier review.
>   * rename functions with _ext_ to _ops_.
>   * addressed some review comments
>   * renamed put and get functions to enqueue and dequeue
>   * renamed rte_mempool_handler struct to rte_mempool_handler_ops
>   * changed occurrences of rte_mempool_handler_ops to const, as they
>     contain function pointers (security)
>   * added some extra comments
> 
> v5 changes: rebasing on top of 35 patch set mempool work.
> 
> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> Signed-off-by: David Hunt <david.hunt@intel.com>
> ---
>  lib/librte_mempool/Makefile              |   1 +
>  lib/librte_mempool/rte_mempool.c         |  71 ++++------
>  lib/librte_mempool/rte_mempool.h         | 235 ++++++++++++++++++++++++++++---
>  lib/librte_mempool/rte_mempool_handler.c | 141 +++++++++++++++++++
>  4 files changed, 384 insertions(+), 64 deletions(-)
>  create mode 100644 lib/librte_mempool/rte_mempool_handler.c
> 
> diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
> index 43423e0..793745f 100644
> --- a/lib/librte_mempool/Makefile
> +++ b/lib/librte_mempool/Makefile
> @@ -42,6 +42,7 @@ LIBABIVER := 2
>  
>  # all source are stored in SRCS-y
>  SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool.c
> +SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_handler.c
>  # install includes
>  SYMLINK-$(CONFIG_RTE_LIBRTE_MEMPOOL)-include := rte_mempool.h
>  
> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> index b54de43..9f34d30 100644
> --- a/lib/librte_mempool/rte_mempool.c
> +++ b/lib/librte_mempool/rte_mempool.c
> @@ -148,7 +148,7 @@ mempool_add_elem(struct rte_mempool *mp, void *obj, phys_addr_t physaddr)
>  #endif
>  
>  	/* enqueue in ring */
> -	rte_ring_sp_enqueue(mp->ring, obj);
> +	rte_mempool_ops_enqueue_bulk(mp, &obj, 1);
>  }
>  
>  /* call obj_cb() for each mempool element */
> @@ -303,40 +303,6 @@ rte_mempool_xmem_usage(__rte_unused void *vaddr, uint32_t elt_num,
>  	return (size_t)paddr_idx << pg_shift;
>  }
>  
> -/* create the internal ring */
> -static int
> -rte_mempool_ring_create(struct rte_mempool *mp)
> -{
> -	int rg_flags = 0, ret;
> -	char rg_name[RTE_RING_NAMESIZE];
> -	struct rte_ring *r;
> -
> -	ret = snprintf(rg_name, sizeof(rg_name),
> -		RTE_MEMPOOL_MZ_FORMAT, mp->name);
> -	if (ret < 0 || ret >= (int)sizeof(rg_name))
> -		return -ENAMETOOLONG;
> -
> -	/* ring flags */
> -	if (mp->flags & MEMPOOL_F_SP_PUT)
> -		rg_flags |= RING_F_SP_ENQ;
> -	if (mp->flags & MEMPOOL_F_SC_GET)
> -		rg_flags |= RING_F_SC_DEQ;
> -
> -	/* Allocate the ring that will be used to store objects.
> -	 * Ring functions will return appropriate errors if we are
> -	 * running as a secondary process etc., so no checks made
> -	 * in this function for that condition.
> -	 */
> -	r = rte_ring_create(rg_name, rte_align32pow2(mp->size + 1),
> -		mp->socket_id, rg_flags);
> -	if (r == NULL)
> -		return -rte_errno;
> -
> -	mp->ring = r;
> -	mp->flags |= MEMPOOL_F_RING_CREATED;
> -	return 0;
> -}
> -
>  /* free a memchunk allocated with rte_memzone_reserve() */
>  static void
>  rte_mempool_memchunk_mz_free(__rte_unused struct rte_mempool_memhdr *memhdr,
> @@ -354,7 +320,7 @@ rte_mempool_free_memchunks(struct rte_mempool *mp)
>  	void *elt;
>  
>  	while (!STAILQ_EMPTY(&mp->elt_list)) {
> -		rte_ring_sc_dequeue(mp->ring, &elt);
> +		rte_mempool_ops_dequeue_bulk(mp, &elt, 1);
>  		(void)elt;
>  		STAILQ_REMOVE_HEAD(&mp->elt_list, next);
>  		mp->populated_size--;
> @@ -383,13 +349,16 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
>  	unsigned i = 0;
>  	size_t off;
>  	struct rte_mempool_memhdr *memhdr;
> -	int ret;
>  
>  	/* create the internal ring if not already done */
>  	if ((mp->flags & MEMPOOL_F_RING_CREATED) == 0) {
> -		ret = rte_mempool_ring_create(mp);
> -		if (ret < 0)
> -			return ret;
> +		rte_errno = 0;
> +		mp->pool = rte_mempool_ops_alloc(mp);
> +		if (mp->pool == NULL) {
> +			if (rte_errno == 0)
> +				return -EINVAL;
> +			return -rte_errno;

Are you sure the rte_errno is a positive value?

> +		}
>  	}
>  
>  	/* mempool is already populated */
> @@ -703,7 +672,7 @@ rte_mempool_free(struct rte_mempool *mp)
>  	rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
>  
>  	rte_mempool_free_memchunks(mp);
> -	rte_ring_free(mp->ring);
> +	rte_mempool_ops_free(mp);
>  	rte_memzone_free(mp->mz);
>  }
>  
> @@ -815,6 +784,20 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
>  		RTE_PTR_ADD(mp, MEMPOOL_HEADER_SIZE(mp, 0));
>  
>  	te->data = mp;
> +
> +	/*
> +	 * Since we have 4 combinations of the SP/SC/MP/MC examine the flags to
> +	 * set the correct index into the handler table.
> +	 */
> +	if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
> +		rte_mempool_set_handler(mp, "ring_sp_sc");
> +	else if (flags & MEMPOOL_F_SP_PUT)
> +		rte_mempool_set_handler(mp, "ring_sp_mc");
> +	else if (flags & MEMPOOL_F_SC_GET)
> +		rte_mempool_set_handler(mp, "ring_mp_sc");
> +	else
> +		rte_mempool_set_handler(mp, "ring_mp_mc");
> +
>  	rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
>  	TAILQ_INSERT_TAIL(mempool_list, te, next);
>  	rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
> @@ -930,7 +913,7 @@ rte_mempool_count(const struct rte_mempool *mp)
>  	unsigned count;
>  	unsigned lcore_id;
>  
> -	count = rte_ring_count(mp->ring);
> +	count = rte_mempool_ops_get_count(mp);
>  
>  	if (mp->cache_size == 0)
>  		return count;
> @@ -1123,7 +1106,7 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
>  
>  	fprintf(f, "mempool <%s>@%p\n", mp->name, mp);
>  	fprintf(f, "  flags=%x\n", mp->flags);
> -	fprintf(f, "  ring=<%s>@%p\n", mp->ring->name, mp->ring);
> +	fprintf(f, "  pool=%p\n", mp->pool);
>  	fprintf(f, "  phys_addr=0x%" PRIx64 "\n", mp->mz->phys_addr);
>  	fprintf(f, "  nb_mem_chunks=%u\n", mp->nb_mem_chunks);
>  	fprintf(f, "  size=%"PRIu32"\n", mp->size);
> @@ -1144,7 +1127,7 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
>  	}
>  
>  	cache_count = rte_mempool_dump_cache(f, mp);
> -	common_count = rte_ring_count(mp->ring);
> +	common_count = rte_mempool_ops_get_count(mp);
>  	if ((cache_count + common_count) > mp->size)
>  		common_count = mp->size - cache_count;
>  	fprintf(f, "  common_pool_count=%u\n", common_count);
> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> index 60339bd..b659565 100644
> --- a/lib/librte_mempool/rte_mempool.h
> +++ b/lib/librte_mempool/rte_mempool.h
> @@ -67,6 +67,7 @@
>  #include <inttypes.h>
>  #include <sys/queue.h>
>  
> +#include <rte_spinlock.h>
>  #include <rte_log.h>
>  #include <rte_debug.h>
>  #include <rte_lcore.h>
> @@ -204,9 +205,13 @@ struct rte_mempool_memhdr {
>  struct rte_mempool {
>  	char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool. */
>  	struct rte_ring *ring;           /**< Ring to store objects. */
> +	union {
> +		void *pool;              /**< Ring or pool to store objects */

What about calling this pdata or priv? I think, it can improve some doc comments.
Describing a "pool" may refer to both the rte_mempool itself or to the mp->pool
pointer. The "priv" would help to understand the code a bit.

Then the rte_mempool_alloc_t can be called rte_mempool_priv_alloc_t. Or something
similar. It's than clear enough, what the function should do...

> +		uint64_t *pool_id;       /**< External mempool identifier */

Why is pool_id a pointer? Is it a typo? I've understood it should be 64b wide
from the discussion (Olivier, Jerin, David):

| >>> something like below,
| >>> union {
| >>> 	void *pool;
| >>> 	uint64_t val;
| >>> 	uint8_t extra_mem[16] // available free bytes in fast path cache line
| >>>
| >>> }  
| >>
| >> Something for the future, perhaps? Will the 8-bits in the flags suffice for
| >> now?  
| > 
| > OK. But simple anonymous union for same type should be OK add now? Not
| > much change I believe, If its difficult then postpone it
| > 
| > union {
| > 	void *pool;
| > 	uint64_t val;
| > }  
|
| I'm ok with the simple union with (void *) and (uint64_t).
| Maybe "val" should be replaced by something more appropriate.
| Is "pool_id" a better name?

> +	};
>  	const struct rte_memzone *mz;    /**< Memzone where pool is allocated */
>  	int flags;                       /**< Flags of the mempool. */
> -	int socket_id;                   /**< Socket id passed at mempool creation. */
> +	int socket_id;                   /**< Socket id passed at create */
>  	uint32_t size;                   /**< Max size of the mempool. */
>  	uint32_t cache_size;             /**< Size of per-lcore local cache. */
>  	uint32_t cache_flushthresh;
> @@ -217,6 +222,14 @@ struct rte_mempool {
>  	uint32_t trailer_size;           /**< Size of trailer (after elt). */
>  
>  	unsigned private_data_size;      /**< Size of private data. */
> +	/**
> +	 * Index into rte_mempool_handler_table array of mempool handler ops

s/rte_mempool_handler_table/rte_mempool_ops_table/

> +	 * structs, which contain callback function pointers.
> +	 * We're using an index here rather than pointers to the callbacks
> +	 * to facilitate any secondary processes that may want to use
> +	 * this mempool.
> +	 */
> +	int32_t handler_idx;

s/handler_idx/ops_idx/

What about ops_index? Not a big deal...

>  
>  	struct rte_mempool_cache *local_cache; /**< Per-lcore local cache */
>  
> @@ -325,6 +338,199 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp,
>  #define __mempool_check_cookies(mp, obj_table_const, n, free) do {} while(0)
>  #endif /* RTE_LIBRTE_MEMPOOL_DEBUG */
>  
> +#define RTE_MEMPOOL_HANDLER_NAMESIZE 32 /**< Max length of handler name. */
> +
> +/**
> + * typedef for allocating the external pool.

What about:

function prototype to provide an implementation specific data

> + * The handler's alloc function does whatever it needs to do to grab memory
> + * for this handler, and sets the *pool opaque pointer in the rte_mempool
> + * struct. In the default handler, *pool points to a ring; in other handlers,

What about:

The function should provide a memory heap representation or another private data
used for allocation by the rte_mempool_ops. E.g. the default ops provides an
instance of the rte_ring for this purpose.

> + * it will most likely point to a different type of data structure.
> + * It will be transparent to the application programmer.

I'd add something like this:

The function should not touch the given *mp instance.

...because it's goal is NOT to set the mp->pool, this is done by the
rte_mempool_populate_phys - the caller of this rte_mempool_ops_alloc.

This is why I've suggested to pass the rte_mempool as const in the v5.
Is there any reason to modify the rte_mempool contents by the implementation?
I think, we just need to read the flags, socket_id, etc.

> + */
> +typedef void *(*rte_mempool_alloc_t)(struct rte_mempool *mp);
> +
> +/** Free the external pool opaque data (that pointed to by *pool) */

What about:

/** Free the opaque private data stored in the mp->pool pointer. */

> +typedef void (*rte_mempool_free_t)(void *p);
> +
> +/**
> + * Put an object in the external pool.
> + * The *p pointer is the opaque data for a given mempool handler (ring,
> + * array, linked list, etc)

The obj_table is not documented. Is it really a table? I'd called an array instead.

> + */
> +typedef int (*rte_mempool_put_t)(void *p, void * const *obj_table, unsigned n);

unsigned int

> +
> +/** Get an object from the external pool. */
> +typedef int (*rte_mempool_get_t)(void *p, void **obj_table, unsigned n);

unsigned int

> +
> +/** Return the number of available objects in the external pool. */

Is the number of available objects or the total number of all objects
(so probably a constant value)?

> +typedef unsigned (*rte_mempool_get_count)(void *p);

Should it be const void *p?

> +
> +/** Structure defining a mempool handler. */

What about:

/** Structure defining mempool operations. */

> +struct rte_mempool_ops {
> +	char name[RTE_MEMPOOL_HANDLER_NAMESIZE]; /**< Name of mempool handler */

s/RTE_MEMPOOL_HANDLER_NAMESIZE/RTE_MEMPOOL_OPS_NAMESIZE/

> +	rte_mempool_alloc_t alloc;       /**< Allocate the external pool. */

What about:

Allocate the private data for this rte_mempool_ops.

> +	rte_mempool_free_t free;         /**< Free the external pool. */
> +	rte_mempool_put_t put;           /**< Put an object. */
> +	rte_mempool_get_t get;           /**< Get an object. */
> +	rte_mempool_get_count get_count; /**< Get the number of available objs. */
> +} __rte_cache_aligned;
> +
> +#define RTE_MEMPOOL_MAX_HANDLER_IDX 16  /**< Max registered handlers */

s/RTE_MEMPOOL_MAX_HANDLER_IDX/RTE_MEMPOOL_MAX_OPS_IDX/

> +
> +/**
> + * Structure storing the table of registered handlers, each of which contain
> + * the function pointers for the mempool handler functions.
> + * Each process has its own storage for this handler struct array so that
> + * the mempools can be shared across primary and secondary processes.
> + * The indices used to access the array are valid across processes, whereas
> + * any function pointers stored directly in the mempool struct would not be.
> + * This results in us simply having "handler_idx" in the mempool struct.
> + */
> +struct rte_mempool_handler_table {

s/rte_mempool_handler_table/rte_mempool_ops_table/

> +	rte_spinlock_t sl;     /**< Spinlock for add/delete. */
> +	uint32_t num_handlers; /**< Number of handlers in the table. */

s/num_handlers/num_ops/

> +	/**
> +	 * Storage for all possible handlers.
> +	 */
> +	struct rte_mempool_ops handler_ops[RTE_MEMPOOL_MAX_HANDLER_IDX];

s/handler_ops/ops/

> +} __rte_cache_aligned;
> +
> +/** Array of registered handlers */
> +extern struct rte_mempool_handler_table rte_mempool_handler_table;

s/rte_mempool_handler_table/rte_mempool_ops_table/

> +
> +/**
> + * @internal Get the mempool handler from its index.
> + *
> + * @param handler_idx
> + *   The index of the handler in the handler table. It must be a valid
> + *   index: (0 <= idx < num_handlers).
> + * @return
> + *   The pointer to the handler in the table.
> + */
> +static inline struct rte_mempool_ops *
> +rte_mempool_handler_get(int handler_idx)

rte_mempool_ops_get(int ops_idx)

> +{
> +	return &rte_mempool_handler_table.handler_ops[handler_idx];
> +}
> +
> +/**
> + * @internal wrapper for external mempool manager alloc callback.
> + *
> + * @param mp
> + *   Pointer to the memory pool.
> + * @return
> + *   The opaque pointer to the external pool.
> + */
> +void *
> +rte_mempool_ops_alloc(struct rte_mempool *mp);
> +
> +/**
> + * @internal wrapper for external mempool manager get callback.
> + *
> + * @param mp
> + *   Pointer to the memory pool.
> + * @param obj_table
> + *   Pointer to a table of void * pointers (objects).
> + * @param n
> + *   Number of objects to get.
> + * @return
> + *   - 0: Success; got n objects.
> + *   - <0: Error; code of handler get function.
> + */
> +static inline int
> +rte_mempool_ops_dequeue_bulk(struct rte_mempool *mp,
> +		void **obj_table, unsigned n)
> +{
> +	struct rte_mempool_ops *handler;

*ops

> +
> +	handler = rte_mempool_handler_get(mp->handler_idx);
> +	return handler->get(mp->pool, obj_table, n);
> +}
> +
> +/**
> + * @internal wrapper for external mempool manager put callback.
> + *
> + * @param mp
> + *   Pointer to the memory pool.
> + * @param obj_table
> + *   Pointer to a table of void * pointers (objects).
> + * @param n
> + *   Number of objects to put.
> + * @return
> + *   - 0: Success; n objects supplied.
> + *   - <0: Error; code of handler put function.
> + */
> +static inline int
> +rte_mempool_ops_enqueue_bulk(struct rte_mempool *mp, void * const *obj_table,
> +		unsigned n)
> +{
> +	struct rte_mempool_ops *handler;

*ops

> +
> +	handler = rte_mempool_handler_get(mp->handler_idx);
> +	return handler->put(mp->pool, obj_table, n);
> +}
> +
> +/**
> + * @internal wrapper for external mempool manager get_count callback.
> + *
> + * @param mp
> + *   Pointer to the memory pool.
> + * @return
> + *   The number of available objects in the external pool.
> + */
> +unsigned
> +rte_mempool_ops_get_count(const struct rte_mempool *mp);
> +
> +/**
> + * @internal wrapper for external mempool manager free callback.
> + *
> + * @param mp
> + *   Pointer to the memory pool.
> + */
> +void
> +rte_mempool_ops_free(struct rte_mempool *mp);
> +
> +/**
> + * Set the handler of a mempool
> + *
> + * This can only be done on a mempool that is not populated, i.e. just after
> + * a call to rte_mempool_create_empty().
> + *
> + * @param mp
> + *   Pointer to the memory pool.
> + * @param name
> + *   Name of the handler.
> + * @return
> + *   - 0: Success; the new handler is configured.
> + *   - EINVAL - Invalid handler name provided
> + *   - EEXIST - mempool already has a handler assigned

They are returned as -EINVAL and -EEXIST.

IMHO, using "-" here is counter-intuitive:

 - EINVAL

does it mean a positive value prefixed with "-", or a negative value?

> + */
> +int
> +rte_mempool_set_handler(struct rte_mempool *mp, const char *name);

rte_mempool_set_ops

What about rte_mempool_set_ops_byname? Not a big deal...

> +
> +/**
> + * Register an external pool handler.

Register mempool operations

> + *
> + * @param h
> + *   Pointer to the external pool handler
> + * @return
> + *   - >=0: Success; return the index of the handler in the table.
> + *   - EINVAL - missing callbacks while registering handler
> + *   - ENOSPC - the maximum number of handlers has been reached

- -EINVAL
- -ENOSPC

> + */
> +int rte_mempool_handler_register(const struct rte_mempool_ops *h);

rte_mempool_ops_register

> +
> +/**
> + * Macro to statically register an external pool handler.

What about adding:

Note that the rte_mempool_ops_register fails silently here when
more than RTE_MEMPOOL_MAX_OPS_IDX ops are registered.

> + */
> +#define MEMPOOL_REGISTER_HANDLER(h)				\

MEMPOOL_REGISTER_OPS

> +	void mp_hdlr_init_##h(void);					\
> +	void __attribute__((constructor, used)) mp_hdlr_init_##h(void)	\
> +	{								\
> +		rte_mempool_handler_register(&h);			\
> +	}
> +
>  /**
>   * An object callback function for mempool.
>   *
> @@ -774,7 +980,7 @@ __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
>  	cache->len += n;
>  
>  	if (cache->len >= flushthresh) {
> -		rte_ring_mp_enqueue_bulk(mp->ring, &cache->objs[cache_size],
> +		rte_mempool_ops_enqueue_bulk(mp, &cache->objs[cache_size],
>  				cache->len - cache_size);
>  		cache->len = cache_size;
>  	}
> @@ -785,19 +991,10 @@ ring_enqueue:
>  
>  	/* push remaining objects in ring */
>  #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
> -	if (is_mp) {
> -		if (rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n) < 0)
> -			rte_panic("cannot put objects in mempool\n");
> -	}
> -	else {
> -		if (rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n) < 0)
> -			rte_panic("cannot put objects in mempool\n");
> -	}
> +	if (rte_mempool_ops_enqueue_bulk(mp, obj_table, n) < 0)
> +		rte_panic("cannot put objects in mempool\n");
>  #else
> -	if (is_mp)
> -		rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n);
> -	else
> -		rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n);
> +	rte_mempool_ops_enqueue_bulk(mp, obj_table, n);
>  #endif
>  }
>  
> @@ -922,7 +1119,7 @@ rte_mempool_put(struct rte_mempool *mp, void *obj)
>   */
>  static inline int __attribute__((always_inline))
>  __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
> -		   unsigned n, int is_mc)
> +		   unsigned n, __rte_unused int is_mc)

unsigned int

>  {
>  	int ret;
>  	struct rte_mempool_cache *cache;
> @@ -945,7 +1142,8 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
>  		uint32_t req = n + (cache_size - cache->len);
>  
>  		/* How many do we require i.e. number to fill the cache + the request */
> -		ret = rte_ring_mc_dequeue_bulk(mp->ring, &cache->objs[cache->len], req);
> +		ret = rte_mempool_ops_dequeue_bulk(mp,
> +			&cache->objs[cache->len], req);
>  		if (unlikely(ret < 0)) {
>  			/*
>  			 * In the offchance that we are buffer constrained,
> @@ -972,10 +1170,7 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
>  ring_dequeue:
>  
>  	/* get remaining objects from ring */
> -	if (is_mc)
> -		ret = rte_ring_mc_dequeue_bulk(mp->ring, obj_table, n);
> -	else
> -		ret = rte_ring_sc_dequeue_bulk(mp->ring, obj_table, n);
> +	ret = rte_mempool_ops_dequeue_bulk(mp, obj_table, n);
>  
>  	if (ret < 0)
>  		__MEMPOOL_STAT_ADD(mp, get_fail, n);
> diff --git a/lib/librte_mempool/rte_mempool_handler.c b/lib/librte_mempool/rte_mempool_handler.c
> new file mode 100644
> index 0000000..ed85d65
> --- /dev/null
> +++ b/lib/librte_mempool/rte_mempool_handler.c

rte_mempool_ops.c

> @@ -0,0 +1,141 @@
> +/*-
> + *   BSD LICENSE
> + *
> + *   Copyright(c) 2016 Intel Corporation. All rights reserved.
> + *   Copyright(c) 2016 6WIND S.A.
> + *   All rights reserved.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in
> + *       the documentation and/or other materials provided with the
> + *       distribution.
> + *     * Neither the name of Intel Corporation nor the names of its
> + *       contributors may be used to endorse or promote products derived
> + *       from this software without specific prior written permission.
> + *
> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#include <stdio.h>
> +#include <string.h>
> +
> +#include <rte_mempool.h>
> +
> +/* indirect jump table to support external memory pools */
> +struct rte_mempool_handler_table rte_mempool_handler_table = {

rte_mempool_ops_table

> +	.sl =  RTE_SPINLOCK_INITIALIZER ,
> +	.num_handlers = 0

num_ops

> +};
> +
> +/* add a new handler in rte_mempool_handler_table, return its index */
> +int
> +rte_mempool_handler_register(const struct rte_mempool_ops *h)
> +{
> +	struct rte_mempool_ops *handler;

*ops

> +	int16_t handler_idx;

ops_idx

> +
> +	rte_spinlock_lock(&rte_mempool_handler_table.sl);
> +
> +	if (rte_mempool_handler_table.num_handlers >=
> +			RTE_MEMPOOL_MAX_HANDLER_IDX) {
> +		rte_spinlock_unlock(&rte_mempool_handler_table.sl);
> +		RTE_LOG(ERR, MEMPOOL,
> +			"Maximum number of mempool handlers exceeded\n");
> +		return -ENOSPC;
> +	}
> +
> +	if (h->put == NULL || h->get == NULL || h->get_count == NULL) {
> +		rte_spinlock_unlock(&rte_mempool_handler_table.sl);
> +		RTE_LOG(ERR, MEMPOOL,
> +			"Missing callback while registering mempool handler\n");
> +		return -EINVAL;
> +	}
> +
> +	handler_idx = rte_mempool_handler_table.num_handlers++;
> +	handler = &rte_mempool_handler_table.handler_ops[handler_idx];
> +	snprintf(handler->name, sizeof(handler->name), "%s", h->name);
> +	handler->alloc = h->alloc;
> +	handler->put = h->put;
> +	handler->get = h->get;
> +	handler->get_count = h->get_count;
> +
> +	rte_spinlock_unlock(&rte_mempool_handler_table.sl);
> +
> +	return handler_idx;
> +}
> +
> +/* wrapper to allocate an external pool handler */
> +void *
> +rte_mempool_ops_alloc(struct rte_mempool *mp)
> +{
> +	struct rte_mempool_ops *handler;

*ops

> +
> +	handler = rte_mempool_handler_get(mp->handler_idx);
> +	if (handler->alloc == NULL)
> +		return NULL;
> +	return handler->alloc(mp);
> +}
> +
> +/* wrapper to free an external pool handler */
> +void
> +rte_mempool_ops_free(struct rte_mempool *mp)
> +{
> +	struct rte_mempool_ops *handler;

*ops

> +
> +	handler = rte_mempool_handler_get(mp->handler_idx);
> +	if (handler->free == NULL)
> +		return;
> +	return handler->free(mp);
> +}
> +
> +/* wrapper to get available objects in an external pool handler */
> +unsigned int
> +rte_mempool_ops_get_count(const struct rte_mempool *mp)
> +{
> +	struct rte_mempool_ops *handler;

*ops

> +
> +	handler = rte_mempool_handler_get(mp->handler_idx);
> +	return handler->get_count(mp->pool);
> +}
> +
> +/* sets a handler previously registered by rte_mempool_handler_register */
> +int
> +rte_mempool_set_handler(struct rte_mempool *mp, const char *name)
> +{
> +	struct rte_mempool_ops *handler = NULL;

*ops

> +	unsigned i;
> +
> +	/* too late, the mempool is already populated */
> +	if (mp->flags & MEMPOOL_F_RING_CREATED)
> +		return -EEXIST;
> +
> +	for (i = 0; i < rte_mempool_handler_table.num_handlers; i++) {
> +		if (!strcmp(name,
> +				rte_mempool_handler_table.handler_ops[i].name)) {
> +			handler = &rte_mempool_handler_table.handler_ops[i];
> +			break;
> +		}
> +	}
> +
> +	if (handler == NULL)
> +		return -EINVAL;
> +
> +	mp->handler_idx = i;
> +	return 0;
> +}

Regards
Jan

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v6 1/5] mempool: support external handler
  2016-06-01 17:54             ` Jan Viktorin
@ 2016-06-02  9:11               ` Hunt, David
  2016-06-02 11:23               ` Hunt, David
  1 sibling, 0 replies; 238+ messages in thread
From: Hunt, David @ 2016-06-02  9:11 UTC (permalink / raw)
  To: Jan Viktorin; +Cc: dev, olivier.matz, jerin.jacob



On 6/1/2016 6:54 PM, Jan Viktorin wrote:
> Hello David,
>
> the rename s/handler/ops/ has a lot of residues. Sorry for that :). I tried to
> mark most of them. Otherwise, I couldn't see many more serious issues for now.

Ah, I had assumed that we were just talking about the
rte_mempool_handler_ops structure, not a global replace of 'handler'
with 'ops'. It does make sense to change it to ops, so we don't have
two words describing the same entity. I'll change to ops.

Just, note the s/pool/priv/ rename suggestion.


I prefer your suggestion of pdata rather than priv; how about "pool_data"?


Again, thanks for the comprehensive review.

Regards,
Dave.

[...]

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v6 1/5] mempool: support external handler
  2016-06-01 17:54             ` Jan Viktorin
  2016-06-02  9:11               ` Hunt, David
@ 2016-06-02 11:23               ` Hunt, David
  2016-06-02 13:43                 ` Jan Viktorin
  1 sibling, 1 reply; 238+ messages in thread
From: Hunt, David @ 2016-06-02 11:23 UTC (permalink / raw)
  To: Jan Viktorin; +Cc: dev, olivier.matz, jerin.jacob



On 6/1/2016 6:54 PM, Jan Viktorin wrote:
>
>   		mp->populated_size--;
> @@ -383,13 +349,16 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
>   	unsigned i = 0;
>   	size_t off;
>   	struct rte_mempool_memhdr *memhdr;
> -	int ret;
>   
>   	/* create the internal ring if not already done */
>   	if ((mp->flags & MEMPOOL_F_RING_CREATED) == 0) {
> -		ret = rte_mempool_ring_create(mp);
> -		if (ret < 0)
> -			return ret;
> +		rte_errno = 0;
> +		mp->pool = rte_mempool_ops_alloc(mp);
> +		if (mp->pool == NULL) {
> +			if (rte_errno == 0)
> +				return -EINVAL;
> +			return -rte_errno;
> Are you sure the rte_errno is a positive value?

If the user returns EINVAL, or similar, we want to return negative, as 
per the rest of DPDK.

>> @@ -204,9 +205,13 @@ struct rte_mempool_memhdr {
>>   struct rte_mempool {
>>   	char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool. */
>>   	struct rte_ring *ring;           /**< Ring to store objects. */
>> +	union {
>> +		void *pool;              /**< Ring or pool to store objects */
> What about calling this pdata or priv? I think, it can improve some doc comments.
> Describing a "pool" may refer to both the rte_mempool itself or to the mp->pool
> pointer. The "priv" would help to understand the code a bit.
>
> Then the rte_mempool_alloc_t can be called rte_mempool_priv_alloc_t. Or something
> similar. It's than clear enough, what the function should do...

I'd lean towards pdata or maybe pool_data. Not sure about the function
name change though; the function does return a pool_data pointer, which
we put into mp->pool_data.

>> +		uint64_t *pool_id;       /**< External mempool identifier */
> Why is pool_id a pointer? Is it a typo? I've understood it should be 64b wide
> from the discussion (Olivier, Jerin, David):

Yes, typo.

>   	uint32_t trailer_size;           /**< Size of trailer (after elt). */
>   
>   	unsigned private_data_size;      /**< Size of private data. */
> +	/**
> +	 * Index into rte_mempool_handler_table array of mempool handler ops
> s/rte_mempool_handler_table/rte_mempool_ops_table/

Done.

>> +	 * structs, which contain callback function pointers.
>> +	 * We're using an index here rather than pointers to the callbacks
>> +	 * to facilitate any secondary processes that may want to use
>> +	 * this mempool.
>> +	 */
>> +	int32_t handler_idx;
> s/handler_idx/ops_idx/
>
> What about ops_index? Not a big deal...

I agree ops_index is better. Changed.

>>   
>>   	struct rte_mempool_cache *local_cache; /**< Per-lcore local cache */
>>   
>> @@ -325,6 +338,199 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp,
>>   #define __mempool_check_cookies(mp, obj_table_const, n, free) do {} while(0)
>>   #endif /* RTE_LIBRTE_MEMPOOL_DEBUG */
>>   
>> +#define RTE_MEMPOOL_HANDLER_NAMESIZE 32 /**< Max length of handler name. */
>> +
>> +/**
>> + * typedef for allocating the external pool.
> What about:
>
> function prototype to provide an implementation specific data
>
>> + * The handler's alloc function does whatever it needs to do to grab memory
>> + * for this handler, and sets the *pool opaque pointer in the rte_mempool
>> + * struct. In the default handler, *pool points to a ring, in other handlers,
> What about:
>
> The function should provide a memory heap representation or another private data
> used for allocation by the rte_mempool_ops. E.g. the default ops provides an
> instance of the rte_ring for this purpose.
>
>> + * it will most likely point to a different type of data structure.
>> + * It will be transparent to the application programmer.
> I'd add something like this:
>
> The function should not touch the given *mp instance.

Agreed. Reworked somewhat.


> ...because it's goal is NOT to set the mp->pool, this is done by the
> rte_mempool_populate_phys - the caller of this rte_mempool_ops_alloc.
>
> This is why I've suggested to pass the rte_mempool as const in the v5.
> Is there any reason to modify the rte_mempool contents by the implementation?
> I think, we just need to read the flags, socket_id, etc.

Yes, I agree it should be const. Changed.

>> + */
>> +typedef void *(*rte_mempool_alloc_t)(struct rte_mempool *mp);
>> +
>> +/** Free the external pool opaque data (that pointed to by *pool) */
> What about:
>
> /** Free the opaque private data stored in the mp->pool pointer. */

I've merged the two versions of the comment:
/** Free the opaque private data pointed to by mp->pool_data pointer */


>> +typedef void (*rte_mempool_free_t)(void *p);
>> +
>> +/**
>> + * Put an object in the external pool.
>> + * The *p pointer is the opaque data for a given mempool handler (ring,
>> + * array, linked list, etc)
> The obj_table is not documented. Is it really a table? I'd called an array instead.

You're probably right, but it's always been called obj_table, and I'm
not sure this patchset is the place to change it. Maybe a follow-up patch?

>> + */
>> +typedef int (*rte_mempool_put_t)(void *p, void * const *obj_table, unsigned n);
> unsigned int
Done
>
>> +
>> +/** Get an object from the external pool. */
>> +typedef int (*rte_mempool_get_t)(void *p, void **obj_table, unsigned n);
> unsigned int
Done
>
>> +
>> +/** Return the number of available objects in the external pool. */
> Is the number of available objects or the total number of all objects
> (so probably a constant value)?

It's intended to be the number of available objects.

>
>> +typedef unsigned (*rte_mempool_get_count)(void *p);
> Should it be const void *p?

Yes, it could be. Changed.

>
>> +
>> +/** Structure defining a mempool handler. */
> What about:
>
> /** Structure defining mempool operations. */

Changed.

>> +struct rte_mempool_ops {
>> +	char name[RTE_MEMPOOL_HANDLER_NAMESIZE]; /**< Name of mempool handler */
> s/RTE_MEMPOOL_HANDLER_NAMESIZE/RTE_MEMPOOL_OPS_NAMESIZE/

Done

>> +	rte_mempool_alloc_t alloc;       /**< Allocate the external pool. */
> What about:
>
> Allocate the private data for this rte_mempool_ops.

Changed to /**< Allocate private data */


>> +	rte_mempool_free_t free;         /**< Free the external pool. */
>> +	rte_mempool_put_t put;           /**< Put an object. */
>> +	rte_mempool_get_t get;           /**< Get an object. */
>> +	rte_mempool_get_count get_count; /**< Get the number of available objs. */
>> +} __rte_cache_aligned;
>> +
>> +#define RTE_MEMPOOL_MAX_HANDLER_IDX 16  /**< Max registered handlers */
> s/RTE_MEMPOOL_MAX_HANDLER_IDX/RTE_MEMPOOL_MAX_OPS_IDX/

Changed

>> +
>> +/**
>> + * Structure storing the table of registered handlers, each of which contain
>> + * the function pointers for the mempool handler functions.
>> + * Each process has its own storage for this handler struct array so that
>> + * the mempools can be shared across primary and secondary processes.
>> + * The indices used to access the array are valid across processes, whereas
>> + * any function pointers stored directly in the mempool struct would not be.
>> + * This results in us simply having "handler_idx" in the mempool struct.
>> + */
>> +struct rte_mempool_handler_table {
> s/rte_mempool_handler_table/rte_mempool_ops_table/

Done

>> +	rte_spinlock_t sl;     /**< Spinlock for add/delete. */
>> +	uint32_t num_handlers; /**< Number of handlers in the table. */
> s/num_handlers/num_ops/

Done

>> +	/**
>> +	 * Storage for all possible handlers.
>> +	 */
>> +	struct rte_mempool_ops handler_ops[RTE_MEMPOOL_MAX_HANDLER_IDX];
> s/handler_ops/ops/

Done
>> +} __rte_cache_aligned;
>> +
>> +/** Array of registered handlers */
>> +extern struct rte_mempool_handler_table rte_mempool_handler_table;
> s/rte_mempool_handler_table/rte_mempool_ops_table/

Done

>> +
>> +/**
>> + * @internal Get the mempool handler from its index.
>> + *
>> + * @param handler_idx
>> + *   The index of the handler in the handler table. It must be a valid
>> + *   index: (0 <= idx < num_handlers).
>> + * @return
>> + *   The pointer to the handler in the table.
>> + */
>> +static inline struct rte_mempool_ops *
>> +rte_mempool_handler_get(int handler_idx)
> rte_mempool_ops_get(int ops_idx)

Done


>> +{
>> +	return &rte_mempool_handler_table.handler_ops[handler_idx];
>> +}
>> +
>> +/**
>> + * @internal wrapper for external mempool manager alloc callback.
>> + *
>> + * @param mp
>> + *   Pointer to the memory pool.
>> + * @return
>> + *   The opaque pointer to the external pool.
>> + */
>> +void *
>> +rte_mempool_ops_alloc(struct rte_mempool *mp);
>> +
>> +/**
>> + * @internal wrapper for external mempool manager get callback.
>> + *
>> + * @param mp
>> + *   Pointer to the memory pool.
>> + * @param obj_table
>> + *   Pointer to a table of void * pointers (objects).
>> + * @param n
>> + *   Number of objects to get.
>> + * @return
>> + *   - 0: Success; got n objects.
>> + *   - <0: Error; code of handler get function.
>> + */
>> +static inline int
>> +rte_mempool_ops_dequeue_bulk(struct rte_mempool *mp,
>> +		void **obj_table, unsigned n)
>> +{
>> +	struct rte_mempool_ops *handler;
> *ops
Done

>> +
>> +	handler = rte_mempool_handler_get(mp->handler_idx);
>> +	return handler->get(mp->pool, obj_table, n);
>> +}
>> +
>> +/**
>> + * @internal wrapper for external mempool manager put callback.
>> + *
>> + * @param mp
>> + *   Pointer to the memory pool.
>> + * @param obj_table
>> + *   Pointer to a table of void * pointers (objects).
>> + * @param n
>> + *   Number of objects to put.
>> + * @return
>> + *   - 0: Success; n objects supplied.
>> + *   - <0: Error; code of handler put function.
>> + */
>> +static inline int
>> +rte_mempool_ops_enqueue_bulk(struct rte_mempool *mp, void * const *obj_table,
>> +		unsigned n)
>> +{
>> +	struct rte_mempool_ops *handler;
> *ops
Done

>> + * @return
>> + *   - 0: Success; the new handler is configured.
>> + *   - EINVAL - Invalid handler name provided
>> + *   - EEXIST - mempool already has a handler assigned
> They are returned as -EINVAL and -EEXIST.
>
> IMHO, using "-" here is counter-intuitive:
>
>   - EINVAL
>
> does it mean a positive with "-" or negative value?

EINVAL is positive, so it's returning negative. Common usage in DPDK, 
afaics.


>> + */
>> +int
>> +rte_mempool_set_handler(struct rte_mempool *mp, const char *name);
> rte_mempool_set_ops
>
> What about rte_mempool_set_ops_byname? Not a big deal...

I agree.  rte_mempool_set_ops_byname

>> +
>> +/**
>> + * Register an external pool handler.
> Register mempool operations

Done

>> + *
>> + * @param h
>> + *   Pointer to the external pool handler
>> + * @return
>> + *   - >=0: Success; return the index of the handler in the table.
>> + *   - EINVAL - missing callbacks while registering handler
>> + *   - ENOSPC - the maximum number of handlers has been reached
> - -EINVAL
> - -ENOSPC

:)

>> + */
>> +int rte_mempool_handler_register(const struct rte_mempool_ops *h);
> rte_mempool_ops_register

Done

>> +
>> +/**
>> + * Macro to statically register an external pool handler.
> What about adding:
>
> Note that the rte_mempool_ops_register fails silently here when
> more than RTE_MEMPOOL_MAX_OPS_IDX ops are registered.

Done


>> + */
>> +#define MEMPOOL_REGISTER_HANDLER(h)	
> MEMPOOL_REGISTER_OPS

Done

> 				\
>> +	void mp_hdlr_init_##h(void);					\
>> +	void __attribute__((constructor, used)) mp_hdlr_init_##h(void)	\
>> +	{								\
>> +		rte_mempool_handler_register(&h);			\
>> +	}
>> +
>>   /**
>>    * An object callback function for mempool.
>>    *
>> @@ -774,7 +980,7 @@ __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
>>   	cache->len += n;
>>   
>>   	if (cache->len >= flushthresh) {
>> -		rte_ring_mp_enqueue_bulk(mp->ring, &cache->objs[cache_size],
>> +		rte_mempool_ops_enqueue_bulk(mp, &cache->objs[cache_size],
>>   				cache->len - cache_size);
>>   		cache->len = cache_size;
>>   	}
>> @@ -785,19 +991,10 @@ ring_enqueue:
>>   
>>   	/* push remaining objects in ring */
>>   #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
>> -	if (is_mp) {
>> -		if (rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n) < 0)
>> -			rte_panic("cannot put objects in mempool\n");
>> -	}
>> -	else {
>> -		if (rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n) < 0)
>> -			rte_panic("cannot put objects in mempool\n");
>> -	}
>> +	if (rte_mempool_ops_enqueue_bulk(mp, obj_table, n) < 0)
>> +		rte_panic("cannot put objects in mempool\n");
>>   #else
>> -	if (is_mp)
>> -		rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n);
>> -	else
>> -		rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n);
>> +	rte_mempool_ops_enqueue_bulk(mp, obj_table, n);
>>   #endif
>>   }
>>   
>> @@ -922,7 +1119,7 @@ rte_mempool_put(struct rte_mempool *mp, void *obj)
>>    */
>>   static inline int __attribute__((always_inline))
>>   __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
>> -		   unsigned n, int is_mc)
>> +		   unsigned n, __rte_unused int is_mc)
> unsigned int

Done

>>   {
>>   	int ret;
>>   	struct rte_mempool_cache *cache;
>> @@ -945,7 +1142,8 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
>>   		uint32_t req = n + (cache_size - cache->len);
>>   
>>   		/* How many do we require i.e. number to fill the cache + the request */
>> -		ret = rte_ring_mc_dequeue_bulk(mp->ring, &cache->objs[cache->len], req);
>> +		ret = rte_mempool_ops_dequeue_bulk(mp,
>> +			&cache->objs[cache->len], req);
>>   		if (unlikely(ret < 0)) {
>>   			/*
>>   			 * In the offchance that we are buffer constrained,
>> @@ -972,10 +1170,7 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
>>   ring_dequeue:
>>   
>>   	/* get remaining objects from ring */
>> -	if (is_mc)
>> -		ret = rte_ring_mc_dequeue_bulk(mp->ring, obj_table, n);
>> -	else
>> -		ret = rte_ring_sc_dequeue_bulk(mp->ring, obj_table, n);
>> +	ret = rte_mempool_ops_dequeue_bulk(mp, obj_table, n);
>>   
>>   	if (ret < 0)
>>   		__MEMPOOL_STAT_ADD(mp, get_fail, n);
>> diff --git a/lib/librte_mempool/rte_mempool_handler.c b/lib/librte_mempool/rte_mempool_handler.c
>> new file mode 100644
>> index 0000000..ed85d65
>> --- /dev/null
>> +++ b/lib/librte_mempool/rte_mempool_handler.c
> rte_mempool_ops.c

Done.

>
>> @@ -0,0 +1,141 @@
>> +/*-
>> + *   BSD LICENSE
>> + *
>> + *   Copyright(c) 2016 Intel Corporation. All rights reserved.
>> + *   Copyright(c) 2016 6WIND S.A.
>> + *   All rights reserved.
>> + *
>> + *   Redistribution and use in source and binary forms, with or without
>> + *   modification, are permitted provided that the following conditions
>> + *   are met:
>> + *
>> + *     * Redistributions of source code must retain the above copyright
>> + *       notice, this list of conditions and the following disclaimer.
>> + *     * Redistributions in binary form must reproduce the above copyright
>> + *       notice, this list of conditions and the following disclaimer in
>> + *       the documentation and/or other materials provided with the
>> + *       distribution.
>> + *     * Neither the name of Intel Corporation nor the names of its
>> + *       contributors may be used to endorse or promote products derived
>> + *       from this software without specific prior written permission.
>> + *
>> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
>> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
>> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
>> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
>> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
>> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
>> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
>> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
>> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
>> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
>> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
>> + */
>> +
>> +#include <stdio.h>
>> +#include <string.h>
>> +
>> +#include <rte_mempool.h>
>> +
>> +/* indirect jump table to support external memory pools */
>> +struct rte_mempool_handler_table rte_mempool_handler_table = {
> rte_mempool_ops_table

Done

>> +	.sl =  RTE_SPINLOCK_INITIALIZER ,
>> +	.num_handlers = 0
> num_ops

Done

>> +};
>> +
>> +/* add a new handler in rte_mempool_handler_table, return its index */
>> +int
>> +rte_mempool_handler_register(const struct rte_mempool_ops *h)
>> +{
>> +	struct rte_mempool_ops *handler;
> *ops

Done

>> +	int16_t handler_idx;
> ops_idx

Done

>> +
>> +	rte_spinlock_lock(&rte_mempool_handler_table.sl);
>> +
>> +	if (rte_mempool_handler_table.num_handlers >=
>> +			RTE_MEMPOOL_MAX_HANDLER_IDX) {
>> +		rte_spinlock_unlock(&rte_mempool_handler_table.sl);
>> +		RTE_LOG(ERR, MEMPOOL,
>> +			"Maximum number of mempool handlers exceeded\n");
>> +		return -ENOSPC;
>> +	}
>> +
>> +	if (h->put == NULL || h->get == NULL || h->get_count == NULL) {
>> +		rte_spinlock_unlock(&rte_mempool_handler_table.sl);
>> +		RTE_LOG(ERR, MEMPOOL,
>> +			"Missing callback while registering mempool handler\n");
>> +		return -EINVAL;
>> +	}
>> +
>> +	handler_idx = rte_mempool_handler_table.num_handlers++;
>> +	handler = &rte_mempool_handler_table.handler_ops[handler_idx];
>> +	snprintf(handler->name, sizeof(handler->name), "%s", h->name);
>> +	handler->alloc = h->alloc;
>> +	handler->put = h->put;
>> +	handler->get = h->get;
>> +	handler->get_count = h->get_count;
>> +
>> +	rte_spinlock_unlock(&rte_mempool_handler_table.sl);
>> +
>> +	return handler_idx;
>> +}
>> +
>> +/* wrapper to allocate an external pool handler */
>> +void *
>> +rte_mempool_ops_alloc(struct rte_mempool *mp)
>> +{
>> +	struct rte_mempool_ops *handler;
> *ops


Done

>> +
>> +	handler = rte_mempool_handler_get(mp->handler_idx);
>> +	if (handler->alloc == NULL)
>> +		return NULL;
>> +	return handler->alloc(mp);
>> +}
>> +
>> +/* wrapper to free an external pool handler */
>> +void
>> +rte_mempool_ops_free(struct rte_mempool *mp)
>> +{
>> +	struct rte_mempool_ops *handler;
> *ops

Done

>> +
>> +	handler = rte_mempool_handler_get(mp->handler_idx);
>> +	if (handler->free == NULL)
>> +		return;
>> +	return handler->free(mp);
>> +}
>> +
>> +/* wrapper to get available objects in an external pool handler */
>> +unsigned int
>> +rte_mempool_ops_get_count(const struct rte_mempool *mp)
>> +{
>> +	struct rte_mempool_ops *handler;
> *ops

Done

>> +
>> +	handler = rte_mempool_handler_get(mp->handler_idx);
>> +	return handler->get_count(mp->pool);
>> +}
>> +
>> +/* sets a handler previously registered by rte_mempool_handler_register */
>> +int
>> +rte_mempool_set_handler(struct rte_mempool *mp, const char *name)
>> +{
>> +	struct rte_mempool_ops *handler = NULL;
> *ops

Done

>> +	unsigned i;
>> +
>> +	/* too late, the mempool is already populated */
>> +	if (mp->flags & MEMPOOL_F_RING_CREATED)
>> +		return -EEXIST;
>> +
>> +	for (i = 0; i < rte_mempool_handler_table.num_handlers; i++) {
>> +		if (!strcmp(name,
>> +				rte_mempool_handler_table.handler_ops[i].name)) {
>> +			handler = &rte_mempool_handler_table.handler_ops[i];
>> +			break;
>> +		}
>> +	}
>> +
>> +	if (handler == NULL)
>> +		return -EINVAL;
>> +
>> +	mp->handler_idx = i;
>> +	return 0;
>> +}
> Regards
> Jan


Thanks,
Dave.

^ permalink raw reply	[flat|nested] 238+ messages in thread

* [PATCH v7 0/5] mempool: add external mempool manager
  2016-06-01 16:19         ` [PATCH v6 0/5] mempool: add external mempool manager David Hunt
                             ` (4 preceding siblings ...)
  2016-06-01 16:19           ` [PATCH v6 5/5] mbuf: get default mempool handler from configuration David Hunt
@ 2016-06-02 13:27           ` David Hunt
  2016-06-02 13:27             ` [PATCH v7 1/5] mempool: support external mempool operations David Hunt
                               ` (5 more replies)
  5 siblings, 6 replies; 238+ messages in thread
From: David Hunt @ 2016-06-02 13:27 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, viktorin, jerin.jacob

Here's the latest version of the External Mempool Manager patchset.
It's re-based on top of the latest head as of 19/5/2016, including
Olivier's 35-part patch series on mempool re-org [1]

[1] http://dpdk.org/ml/archives/dev/2016-May/039229.html


v7 changes:

 * Changed rte_mempool_handler_table to rte_mempool_ops_table
 * Changed hander_idx to ops_index in rte_mempool struct
 * Reworked comments in rte_mempool.h around ops functions
 * Changed rte_mempool_hander.c to rte_mempool_ops.c
 * Changed all functions containing _handler_ to _ops_
 * Now there is no mention of 'handler' left
 * Other small changes out of review of mailing list

v6 changes:

 * Moved the flags handling from rte_mempool_create_empty to
   rte_mempool_create, as it's only there for backward compatibility
 * Various comment additions and cleanup
 * Renamed rte_mempool_handler to rte_mempool_ops
 * Added a union for *pool and u64 pool_id in struct rte_mempool
 * split the original patch into a few parts for easier review.
 * rename functions with _ext_ to _ops_.
 * addressed review comments
 * renamed put and get functions to enqueue and dequeue
 * changed occurrences of rte_mempool_ops to const, as they
   contain function pointers (security)
 * split out the default external mempool handler into a separate
   patch for easier review

v5 changes:
 * rebasing, as it is dependent on another patch series [1]

v4 changes (Olivier Matz):
 * remove the rte_mempool_create_ext() function. To change the handler, the
   user has to do the following:
   - mp = rte_mempool_create_empty()
   - rte_mempool_set_handler(mp, "my_handler")
   - rte_mempool_populate_default(mp)
   This avoids adding another function with more than 10 arguments, duplicating
   the doxygen comments
 * change the api of rte_mempool_alloc_t: only the mempool pointer is required
   as all information is available in it
 * change the api of rte_mempool_free_t: remove return value
 * move inline wrapper functions from the .c to the .h (else they won't be
   inlined). This implies having one header file (rte_mempool.h); otherwise it
   would have generated cross-dependency issues.
 * remove now unused MEMPOOL_F_INT_HANDLER (note: it was misused anyway due
   to the use of && instead of &)
 * fix build in debug mode (__MEMPOOL_STAT_ADD(mp, put_pool, n) remaining)
 * fix build with shared libraries (global handler has to be declared in
   the .map file)
 * rationalize #include order
 * remove unused function rte_mempool_get_handler_name()
 * rename some structures, fields, functions
 * remove the static in front of rte_tailq_elem rte_mempool_tailq (comment
   from Yuanhan)
 * test the ext mempool handler in the same file than standard mempool tests,
   avoiding to duplicate the code
 * rework the custom handler in mempool_test
 * rework a bit the patch selecting default mbuf pool handler
 * fix some doxygen comments

v3 changes:
 * simplified the file layout, renamed to rte_mempool_handler.[hc]
 * moved the default handlers into rte_mempool_default.c
 * moved the example handler out into app/test/test_ext_mempool.c
 * removed is_mc/is_mp change, slight perf degradation on sp cached operation
 * removed stack handler, may re-introduce at a later date
 * Changes out of code reviews

v2 changes:
 * There was a lot of duplicate code between rte_mempool_xmem_create and
   rte_mempool_create_ext. This has now been refactored and is hopefully
   cleaner.
 * The RTE_NEXT_ABI define is now used to allow building of the library
   in a format that is compatible with binaries built against previous
   versions of DPDK.
 * Changes out of code reviews. Hopefully I've got most of them included.

The External Mempool Manager is an extension to the mempool API that allows
users to add and use an external mempool manager, which allows external memory
subsystems such as external hardware memory management systems and software
based memory allocators to be used with DPDK.

The existing API to the internal DPDK mempool manager will remain unchanged
and will be backward compatible. However, there will be an ABI breakage, as
the mempool struct is changing. These changes are all contained within
RTE_NEXT_ABI defs, and the current or next code can be selected with
the CONFIG_RTE_NEXT_ABI config setting.

There are two aspects to external mempool manager.
  1. Adding the code for your new mempool operations (ops). This is
     achieved by adding a new mempool ops source file into the
     librte_mempool library, and using the REGISTER_MEMPOOL_OPS macro.
  2. Using the new API to call rte_mempool_create_empty and
     rte_mempool_set_ops_byname to create a new mempool,
     using the name parameter to identify which ops to use.

New API calls added
 1. A new rte_mempool_create_empty() function
 2. rte_mempool_set_ops_byname() which sets the mempool's ops (functions)
 3. rte_mempool_populate_default() and rte_mempool_populate_anon() functions,
    which populate the mempool using the relevant ops

Several external mempool managers may be used in the same application. A new
mempool can then be created by using the new 'create' function, providing the
mempool ops struct name to point the mempool to the relevant mempool manager
callback structure.

The old 'create' function can still be called by legacy programs, and will
internally work out the mempool handle based on the flags provided (single
producer, single consumer, etc). By default handles are created internally to
implement the built-in DPDK mempool manager and mempool types.

The external mempool manager needs to provide the following functions.
 1. alloc     - allocates the mempool memory, and adds each object onto a ring
 2. put       - puts an object back into the mempool once an application has
                finished with it
 3. get       - gets an object from the mempool for use by the application
 4. get_count - gets the number of available objects in the mempool
 5. free      - frees the mempool memory

Every time a get/put/get_count is called from the application/PMD, the
callback for that mempool is called. These functions are in the fastpath,
and any unoptimised ops may limit performance.

The new APIs are as follows:

1. rte_mempool_create_empty

struct rte_mempool *
rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
    unsigned cache_size, unsigned private_data_size,
    int socket_id, unsigned flags);

2. rte_mempool_set_ops_byname()

int
rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name);

3. rte_mempool_populate_default()

int rte_mempool_populate_default(struct rte_mempool *mp);

4. rte_mempool_populate_anon()

int rte_mempool_populate_anon(struct rte_mempool *mp);

Please see rte_mempool.h for further information on the parameters.


The important thing to note is that the mempool ops struct is passed by name
to rte_mempool_set_ops_byname, which looks through the ops struct array to
get the ops_index, which is then stored in the rte_mempool structure. This
allows multiple processes to use the same mempool, as the function pointers
are accessed via the ops index.

The mempool ops structure contains callbacks to the implementation of
the ops function, and is set up for registration as follows:

static const struct rte_mempool_ops ops_sp_mc = {
    .name = "ring_sp_mc",
    .alloc = rte_mempool_common_ring_alloc,
    .put = common_ring_sp_put,
    .get = common_ring_mc_get,
    .get_count = common_ring_get_count,
    .free = common_ring_free,
};

And then the following macro will register the ops in the array of ops
structures

REGISTER_MEMPOOL_OPS(ops_mp_mc);

For an example of API usage, please see app/test/test_mempool.c, which
implements a rudimentary "custom_handler" mempool manager using simple mallocs
for each mempool object. This file also contains the callbacks and self
registration for the new handler.

David Hunt (4):
  mempool: support external mempool operations
  mempool: remove rte_ring from rte_mempool struct
  mempool: add default external mempool ops
  mbuf: allow apps to change default mempool ops

Olivier Matz (1):
  app/test: test external mempool manager

^ permalink raw reply	[flat|nested] 238+ messages in thread

* [PATCH v7 1/5] mempool: support external mempool operations
  2016-06-02 13:27           ` [PATCH v7 0/5] mempool: add external mempool manager David Hunt
@ 2016-06-02 13:27             ` David Hunt
  2016-06-02 13:38               ` [PATCH v7 0/5] mempool: add external mempool manager Hunt, David
                                 ` (2 more replies)
  2016-06-02 13:27             ` [PATCH v7 2/5] mempool: remove rte_ring from rte_mempool struct David Hunt
                               ` (4 subsequent siblings)
  5 siblings, 3 replies; 238+ messages in thread
From: David Hunt @ 2016-06-02 13:27 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, viktorin, jerin.jacob, David Hunt

Until now, the objects stored in a mempool were internally stored in a
ring. This patch introduces the possibility to register external handlers
replacing the ring.

The default behavior remains unchanged, but calling the new function
rte_mempool_set_ops_byname() right after rte_mempool_create_empty() allows
the user to change the ops that will be used when populating
the mempool.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
---
 lib/librte_mempool/Makefile          |   1 +
 lib/librte_mempool/rte_mempool.c     |  71 ++++-------
 lib/librte_mempool/rte_mempool.h     | 240 ++++++++++++++++++++++++++++++++---
 lib/librte_mempool/rte_mempool_ops.c | 141 ++++++++++++++++++++
 4 files changed, 389 insertions(+), 64 deletions(-)
 create mode 100644 lib/librte_mempool/rte_mempool_ops.c

diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
index 43423e0..4cbf772 100644
--- a/lib/librte_mempool/Makefile
+++ b/lib/librte_mempool/Makefile
@@ -42,6 +42,7 @@ LIBABIVER := 2
 
 # all source are stored in SRCS-y
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool.c
+SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_ops.c
 # install includes
 SYMLINK-$(CONFIG_RTE_LIBRTE_MEMPOOL)-include := rte_mempool.h
 
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index b54de43..1c61c57 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -148,7 +148,7 @@ mempool_add_elem(struct rte_mempool *mp, void *obj, phys_addr_t physaddr)
 #endif
 
 	/* enqueue in ring */
-	rte_ring_sp_enqueue(mp->ring, obj);
+	rte_mempool_ops_enqueue_bulk(mp, &obj, 1);
 }
 
 /* call obj_cb() for each mempool element */
@@ -303,40 +303,6 @@ rte_mempool_xmem_usage(__rte_unused void *vaddr, uint32_t elt_num,
 	return (size_t)paddr_idx << pg_shift;
 }
 
-/* create the internal ring */
-static int
-rte_mempool_ring_create(struct rte_mempool *mp)
-{
-	int rg_flags = 0, ret;
-	char rg_name[RTE_RING_NAMESIZE];
-	struct rte_ring *r;
-
-	ret = snprintf(rg_name, sizeof(rg_name),
-		RTE_MEMPOOL_MZ_FORMAT, mp->name);
-	if (ret < 0 || ret >= (int)sizeof(rg_name))
-		return -ENAMETOOLONG;
-
-	/* ring flags */
-	if (mp->flags & MEMPOOL_F_SP_PUT)
-		rg_flags |= RING_F_SP_ENQ;
-	if (mp->flags & MEMPOOL_F_SC_GET)
-		rg_flags |= RING_F_SC_DEQ;
-
-	/* Allocate the ring that will be used to store objects.
-	 * Ring functions will return appropriate errors if we are
-	 * running as a secondary process etc., so no checks made
-	 * in this function for that condition.
-	 */
-	r = rte_ring_create(rg_name, rte_align32pow2(mp->size + 1),
-		mp->socket_id, rg_flags);
-	if (r == NULL)
-		return -rte_errno;
-
-	mp->ring = r;
-	mp->flags |= MEMPOOL_F_RING_CREATED;
-	return 0;
-}
-
 /* free a memchunk allocated with rte_memzone_reserve() */
 static void
 rte_mempool_memchunk_mz_free(__rte_unused struct rte_mempool_memhdr *memhdr,
@@ -354,7 +320,7 @@ rte_mempool_free_memchunks(struct rte_mempool *mp)
 	void *elt;
 
 	while (!STAILQ_EMPTY(&mp->elt_list)) {
-		rte_ring_sc_dequeue(mp->ring, &elt);
+		rte_mempool_ops_dequeue_bulk(mp, &elt, 1);
 		(void)elt;
 		STAILQ_REMOVE_HEAD(&mp->elt_list, next);
 		mp->populated_size--;
@@ -383,13 +349,16 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
 	unsigned i = 0;
 	size_t off;
 	struct rte_mempool_memhdr *memhdr;
-	int ret;
 
 	/* create the internal ring if not already done */
 	if ((mp->flags & MEMPOOL_F_RING_CREATED) == 0) {
-		ret = rte_mempool_ring_create(mp);
-		if (ret < 0)
-			return ret;
+		rte_errno = 0;
+		mp->pool_data = rte_mempool_ops_alloc(mp);
+		if (mp->pool_data == NULL) {
+			if (rte_errno == 0)
+				return -EINVAL;
+			return -rte_errno;
+		}
 	}
 
 	/* mempool is already populated */
@@ -703,7 +672,7 @@ rte_mempool_free(struct rte_mempool *mp)
 	rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
 
 	rte_mempool_free_memchunks(mp);
-	rte_ring_free(mp->ring);
+	rte_mempool_ops_free(mp);
 	rte_memzone_free(mp->mz);
 }
 
@@ -815,6 +784,20 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
 		RTE_PTR_ADD(mp, MEMPOOL_HEADER_SIZE(mp, 0));
 
 	te->data = mp;
+
+	/*
+	 * Since we have 4 combinations of SP/SC and MP/MC, examine the flags
+	 * to set the correct index into the table of ops structs.
+	 */
+	if ((flags & MEMPOOL_F_SP_PUT) && (flags & MEMPOOL_F_SC_GET))
+		rte_mempool_set_ops_byname(mp, "ring_sp_sc");
+	else if (flags & MEMPOOL_F_SP_PUT)
+		rte_mempool_set_ops_byname(mp, "ring_sp_mc");
+	else if (flags & MEMPOOL_F_SC_GET)
+		rte_mempool_set_ops_byname(mp, "ring_mp_sc");
+	else
+		rte_mempool_set_ops_byname(mp, "ring_mp_mc");
+
 	rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
 	TAILQ_INSERT_TAIL(mempool_list, te, next);
 	rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
@@ -930,7 +913,7 @@ rte_mempool_count(const struct rte_mempool *mp)
 	unsigned count;
 	unsigned lcore_id;
 
-	count = rte_ring_count(mp->ring);
+	count = rte_mempool_ops_get_count(mp);
 
 	if (mp->cache_size == 0)
 		return count;
@@ -1123,7 +1106,7 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 
 	fprintf(f, "mempool <%s>@%p\n", mp->name, mp);
 	fprintf(f, "  flags=%x\n", mp->flags);
-	fprintf(f, "  ring=<%s>@%p\n", mp->ring->name, mp->ring);
+	fprintf(f, "  pool=%p\n", mp->pool_data);
 	fprintf(f, "  phys_addr=0x%" PRIx64 "\n", mp->mz->phys_addr);
 	fprintf(f, "  nb_mem_chunks=%u\n", mp->nb_mem_chunks);
 	fprintf(f, "  size=%"PRIu32"\n", mp->size);
@@ -1144,7 +1127,7 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 	}
 
 	cache_count = rte_mempool_dump_cache(f, mp);
-	common_count = rte_ring_count(mp->ring);
+	common_count = rte_mempool_ops_get_count(mp);
 	if ((cache_count + common_count) > mp->size)
 		common_count = mp->size - cache_count;
 	fprintf(f, "  common_pool_count=%u\n", common_count);
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 60339bd..a6b28b0 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -67,6 +67,7 @@
 #include <inttypes.h>
 #include <sys/queue.h>
 
+#include <rte_spinlock.h>
 #include <rte_log.h>
 #include <rte_debug.h>
 #include <rte_lcore.h>
@@ -204,9 +205,13 @@ struct rte_mempool_memhdr {
 struct rte_mempool {
 	char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool. */
 	struct rte_ring *ring;           /**< Ring to store objects. */
+	union {
+		void *pool_data;         /**< Ring or pool to store objects */
+		uint64_t pool_id;        /**< External mempool identifier */
+	};
 	const struct rte_memzone *mz;    /**< Memzone where pool is allocated */
 	int flags;                       /**< Flags of the mempool. */
-	int socket_id;                   /**< Socket id passed at mempool creation. */
+	int socket_id;                   /**< Socket id passed at create */
 	uint32_t size;                   /**< Max size of the mempool. */
 	uint32_t cache_size;             /**< Size of per-lcore local cache. */
 	uint32_t cache_flushthresh;
@@ -217,6 +222,14 @@ struct rte_mempool {
 	uint32_t trailer_size;           /**< Size of trailer (after elt). */
 
 	unsigned private_data_size;      /**< Size of private data. */
+	/**
+	 * Index into rte_mempool_ops_table array of mempool ops
+	 * structs, which contain callback function pointers.
+	 * We're using an index here rather than pointers to the callbacks
+	 * to facilitate any secondary processes that may want to use
+	 * this mempool.
+	 */
+	int32_t ops_index;
 
 	struct rte_mempool_cache *local_cache; /**< Per-lcore local cache */
 
@@ -325,6 +338,204 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp,
 #define __mempool_check_cookies(mp, obj_table_const, n, free) do {} while(0)
 #endif /* RTE_LIBRTE_MEMPOOL_DEBUG */
 
+#define RTE_MEMPOOL_OPS_NAMESIZE 32 /**< Max length of ops struct name. */
+
+/**
+ * Prototype for an implementation-specific data provisioning function.
+ * The function should provide the implementation-specific memory for
+ * use by the other mempool ops functions in a given mempool ops struct.
+ * E.g. the default ops provides an instance of rte_ring for this purpose.
+ * Other implementations will most likely point to a different type of
+ * data structure, which will be transparent to the application programmer.
+ * The function should not modify the given *mp instance.
+ */
+typedef void *(*rte_mempool_alloc_t)(const struct rte_mempool *mp);
+
+/** Free the opaque private data pointed to by mp->pool_data. */
+typedef void (*rte_mempool_free_t)(void *p);
+
+/**
+ * Put objects into the external pool.
+ * The *p pointer is the opaque data for a given mempool manager (ring,
+ * array, linked list, etc.).
+ */
+typedef int (*rte_mempool_put_t)(void *p,
+		void * const *obj_table, unsigned int n);
+
+/** Get objects from the external pool. */
+typedef int (*rte_mempool_get_t)(void *p,
+		void **obj_table, unsigned int n);
+
+/** Return the number of available objects in the external pool. */
+typedef unsigned (*rte_mempool_get_count)(void *p);
+
+/** Structure defining mempool operations structure */
+struct rte_mempool_ops {
+	char name[RTE_MEMPOOL_OPS_NAMESIZE]; /**< Name of mempool ops struct */
+	rte_mempool_alloc_t alloc;       /**< Allocate private data */
+	rte_mempool_free_t free;         /**< Free the external pool. */
+	rte_mempool_put_t put;           /**< Put an object. */
+	rte_mempool_get_t get;           /**< Get an object. */
+	rte_mempool_get_count get_count; /**< Get the number of available objs. */
+} __rte_cache_aligned;
+
+#define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */
+
+/**
+ * Structure storing the table of registered ops structs, each of which
+ * contains the function pointers for the mempool ops functions.
+ * Each process has its own storage for this ops struct array so that
+ * the mempools can be shared across primary and secondary processes.
+ * The indices used to access the array are valid across processes, whereas
+ * any function pointers stored directly in the mempool struct would not be.
+ * This is why the mempool struct simply stores an "ops_index".
+ */
+struct rte_mempool_ops_table {
+	rte_spinlock_t sl;     /**< Spinlock for add/delete. */
+	uint32_t num_ops;      /**< Number of used ops structs in the table. */
+	/**
+	 * Storage for all possible ops structs.
+	 */
+	struct rte_mempool_ops ops[RTE_MEMPOOL_MAX_OPS_IDX];
+} __rte_cache_aligned;
+
+/** Array of registered ops structs */
+extern struct rte_mempool_ops_table rte_mempool_ops_table;
+
+/**
+ * @internal Get the mempool ops struct from its index.
+ *
+ * @param ops_index
+ *   The index of the ops struct in the ops struct table. It must be a valid
+ *   index: (0 <= idx < num_ops).
+ * @return
+ *   The pointer to the ops struct in the table.
+ */
+static inline struct rte_mempool_ops *
+rte_mempool_ops_get(int ops_index)
+{
+	return &rte_mempool_ops_table.ops[ops_index];
+}
+
+/**
+ * @internal wrapper for external mempool manager alloc callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @return
+ *   The opaque pointer to the external pool.
+ */
+void *
+rte_mempool_ops_alloc(const struct rte_mempool *mp);
+
+/**
+ * @internal wrapper for external mempool manager get callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param obj_table
+ *   Pointer to a table of void * pointers (objects).
+ * @param n
+ *   Number of objects to get.
+ * @return
+ *   - 0: Success; got n objects.
+ *   - <0: Error; code of get function.
+ */
+static inline int
+rte_mempool_ops_dequeue_bulk(struct rte_mempool *mp,
+		void **obj_table, unsigned n)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_ops_get(mp->ops_index);
+	return ops->get(mp->pool_data, obj_table, n);
+}
+
+/**
+ * @internal wrapper for external mempool manager put callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param obj_table
+ *   Pointer to a table of void * pointers (objects).
+ * @param n
+ *   Number of objects to put.
+ * @return
+ *   - 0: Success; n objects supplied.
+ *   - <0: Error; code of put function.
+ */
+static inline int
+rte_mempool_ops_enqueue_bulk(struct rte_mempool *mp, void * const *obj_table,
+		unsigned n)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_ops_get(mp->ops_index);
+	return ops->put(mp->pool_data, obj_table, n);
+}
+
+/**
+ * @internal wrapper for external mempool manager get_count callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @return
+ *   The number of available objects in the external pool.
+ */
+unsigned
+rte_mempool_ops_get_count(const struct rte_mempool *mp);
+
+/**
+ * @internal wrapper for external mempool manager free callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ */
+void
+rte_mempool_ops_free(struct rte_mempool *mp);
+
+/**
+ * Set the ops of a mempool
+ *
+ * This can only be done on a mempool that is not populated, i.e. just after
+ * a call to rte_mempool_create_empty().
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param name
+ *   Name of the ops structure to use for this mempool.
+ * @return
+ *   - 0: Success; the mempool is now using the requested ops functions.
+ *   - -EINVAL - Invalid ops struct name provided
+ *   - -EEXIST - mempool already has an ops struct assigned
+ */
+int
+rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name);
+
+/**
+ * Register mempool operations
+ *
+ * @param h
+ *   Pointer to an ops structure to register.
+ * @return
+ *   - >=0: Success; return the index of the ops struct in the table.
+ *   - -EINVAL - some missing callbacks while registering ops struct
+ *   - -ENOSPC - the maximum number of ops structs has been reached
+ */
+int rte_mempool_ops_register(const struct rte_mempool_ops *h);
+
+/**
+ * Macro to statically register the ops of an external mempool manager.
+ * Note that rte_mempool_ops_register() fails silently here when more
+ * than RTE_MEMPOOL_MAX_OPS_IDX ops structs have been registered.
+ */
+#define MEMPOOL_REGISTER_OPS(h)					\
+	void mp_hdlr_init_##h(void);					\
+	void __attribute__((constructor, used)) mp_hdlr_init_##h(void)	\
+	{								\
+		rte_mempool_ops_register(&h);			\
+	}
+
 /**
  * An object callback function for mempool.
  *
@@ -774,7 +985,7 @@ __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
 	cache->len += n;
 
 	if (cache->len >= flushthresh) {
-		rte_ring_mp_enqueue_bulk(mp->ring, &cache->objs[cache_size],
+		rte_mempool_ops_enqueue_bulk(mp, &cache->objs[cache_size],
 				cache->len - cache_size);
 		cache->len = cache_size;
 	}
@@ -785,19 +996,10 @@ ring_enqueue:
 
 	/* push remaining objects in ring */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-	if (is_mp) {
-		if (rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n) < 0)
-			rte_panic("cannot put objects in mempool\n");
-	}
-	else {
-		if (rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n) < 0)
-			rte_panic("cannot put objects in mempool\n");
-	}
+	if (rte_mempool_ops_enqueue_bulk(mp, obj_table, n) < 0)
+		rte_panic("cannot put objects in mempool\n");
 #else
-	if (is_mp)
-		rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n);
-	else
-		rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n);
+	rte_mempool_ops_enqueue_bulk(mp, obj_table, n);
 #endif
 }
 
@@ -922,7 +1124,7 @@ rte_mempool_put(struct rte_mempool *mp, void *obj)
  */
 static inline int __attribute__((always_inline))
 __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
-		   unsigned n, int is_mc)
+		   unsigned int n, int is_mc)
 {
 	int ret;
 	struct rte_mempool_cache *cache;
@@ -945,7 +1147,8 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
 		uint32_t req = n + (cache_size - cache->len);
 
 		/* How many do we require i.e. number to fill the cache + the request */
-		ret = rte_ring_mc_dequeue_bulk(mp->ring, &cache->objs[cache->len], req);
+		ret = rte_mempool_ops_dequeue_bulk(mp,
+			&cache->objs[cache->len], req);
 		if (unlikely(ret < 0)) {
 			/*
 			 * In the offchance that we are buffer constrained,
@@ -972,10 +1175,7 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
 ring_dequeue:
 
 	/* get remaining objects from ring */
-	if (is_mc)
-		ret = rte_ring_mc_dequeue_bulk(mp->ring, obj_table, n);
-	else
-		ret = rte_ring_sc_dequeue_bulk(mp->ring, obj_table, n);
+	ret = rte_mempool_ops_dequeue_bulk(mp, obj_table, n);
 
 	if (ret < 0)
 		__MEMPOOL_STAT_ADD(mp, get_fail, n);
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
new file mode 100644
index 0000000..ec92a58
--- /dev/null
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -0,0 +1,141 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2016 6WIND S.A.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <string.h>
+
+#include <rte_mempool.h>
+
+/* indirect jump table to support external memory pools */
+struct rte_mempool_ops_table rte_mempool_ops_table = {
+	.sl = RTE_SPINLOCK_INITIALIZER,
+	.num_ops = 0
+};
+
+/* add a new ops struct in rte_mempool_ops_table, return its index */
+int
+rte_mempool_ops_register(const struct rte_mempool_ops *h)
+{
+	struct rte_mempool_ops *ops;
+	int16_t ops_index;
+
+	rte_spinlock_lock(&rte_mempool_ops_table.sl);
+
+	if (rte_mempool_ops_table.num_ops >=
+			RTE_MEMPOOL_MAX_OPS_IDX) {
+		rte_spinlock_unlock(&rte_mempool_ops_table.sl);
+		RTE_LOG(ERR, MEMPOOL,
+			"Maximum number of mempool ops structs exceeded\n");
+		return -ENOSPC;
+	}
+
+	if (h->put == NULL || h->get == NULL || h->get_count == NULL) {
+		rte_spinlock_unlock(&rte_mempool_ops_table.sl);
+		RTE_LOG(ERR, MEMPOOL,
+			"Missing callback while registering mempool ops\n");
+		return -EINVAL;
+	}
+
+	ops_index = rte_mempool_ops_table.num_ops++;
+	ops = &rte_mempool_ops_table.ops[ops_index];
+	snprintf(ops->name, sizeof(ops->name), "%s", h->name);
+	ops->alloc = h->alloc;
+	ops->free = h->free;
+	ops->put = h->put;
+	ops->get = h->get;
+	ops->get_count = h->get_count;
+
+	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
+
+	return ops_index;
+}
+
+/* wrapper to allocate an external mempool's private (pool) data */
+void *
+rte_mempool_ops_alloc(const struct rte_mempool *mp)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_ops_get(mp->ops_index);
+	if (ops->alloc == NULL)
+		return NULL;
+	return ops->alloc(mp);
+}
+
+/* wrapper to free the private (pool) data of an external mempool */
+void
+rte_mempool_ops_free(struct rte_mempool *mp)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_ops_get(mp->ops_index);
+	if (ops->free == NULL)
+		return;
+	ops->free(mp->pool_data);
+}
+
+/* wrapper to get available objects in an external mempool */
+unsigned int
+rte_mempool_ops_get_count(const struct rte_mempool *mp)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_ops_get(mp->ops_index);
+	return ops->get_count(mp->pool_data);
+}
+
+/* sets mempool ops previously registered by rte_mempool_ops_register */
+int
+rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name)
+{
+	struct rte_mempool_ops *ops = NULL;
+	unsigned i;
+
+	/* too late, the mempool is already populated */
+	if (mp->flags & MEMPOOL_F_RING_CREATED)
+		return -EEXIST;
+
+	for (i = 0; i < rte_mempool_ops_table.num_ops; i++) {
+		if (!strcmp(name,
+				rte_mempool_ops_table.ops[i].name)) {
+			ops = &rte_mempool_ops_table.ops[i];
+			break;
+		}
+	}
+
+	if (ops == NULL)
+		return -EINVAL;
+
+	mp->ops_index = i;
+	return 0;
+}
-- 
2.5.5


* [PATCH v7 2/5] mempool: remove rte_ring from rte_mempool struct
  2016-06-02 13:27           ` [PATCH v7 0/5] mempool: add external mempool manager David Hunt
  2016-06-02 13:27             ` [PATCH v7 1/5] mempool: support external mempool operations David Hunt
@ 2016-06-02 13:27             ` David Hunt
  2016-06-03 12:28               ` Olivier MATZ
  2016-06-02 13:27             ` [PATCH v7 3/5] mempool: add default external mempool ops David Hunt
                               ` (3 subsequent siblings)
  5 siblings, 1 reply; 238+ messages in thread
From: David Hunt @ 2016-06-02 13:27 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, viktorin, jerin.jacob, David Hunt

Now that we're moving to an external mempool handler, which
uses a void *pool_data as a pointer to the pool data, remove the
unneeded ring pointer from the mempool struct.
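
This removal is safe because patch 1 put the replacement storage in
place: a union whose single slot holds either a pointer (software-managed
pools) or a 64-bit identifier (hardware pool managers). A minimal,
self-contained sketch of that idea:

```c
#include <assert.h>
#include <stdint.h>

/* The same storage can carry either a pointer to driver-private data
 * (e.g. an rte_ring) or a raw 64-bit hardware pool identifier. */
union pool_ref {
	void *pool_data;
	uint64_t pool_id;
};
```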

Signed-off-by: David Hunt <david.hunt@intel.com>
---
 app/test/test_mempool_perf.c     | 1 -
 lib/librte_mempool/rte_mempool.h | 1 -
 2 files changed, 2 deletions(-)

diff --git a/app/test/test_mempool_perf.c b/app/test/test_mempool_perf.c
index cdc02a0..091c1df 100644
--- a/app/test/test_mempool_perf.c
+++ b/app/test/test_mempool_perf.c
@@ -161,7 +161,6 @@ per_lcore_mempool_test(__attribute__((unused)) void *arg)
 							   n_get_bulk);
 				if (unlikely(ret < 0)) {
 					rte_mempool_dump(stdout, mp);
-					rte_ring_dump(stdout, mp->ring);
 					/* in this case, objects are lost... */
 					return -1;
 				}
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index a6b28b0..c33eeb8 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -204,7 +204,6 @@ struct rte_mempool_memhdr {
  */
 struct rte_mempool {
 	char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool. */
-	struct rte_ring *ring;           /**< Ring to store objects. */
 	union {
 		void *pool_data;         /**< Ring or pool to store objects */
 		uint64_t pool_id;        /**< External mempool identifier */
-- 
2.5.5


* [PATCH v7 3/5] mempool: add default external mempool ops
  2016-06-02 13:27           ` [PATCH v7 0/5] mempool: add external mempool manager David Hunt
  2016-06-02 13:27             ` [PATCH v7 1/5] mempool: support external mempool operations David Hunt
  2016-06-02 13:27             ` [PATCH v7 2/5] mempool: remove rte_ring from rte_mempool struct David Hunt
@ 2016-06-02 13:27             ` David Hunt
  2016-06-02 13:27             ` [PATCH v7 4/5] app/test: test external mempool manager David Hunt
                               ` (2 subsequent siblings)
  5 siblings, 0 replies; 238+ messages in thread
From: David Hunt @ 2016-06-02 13:27 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, viktorin, jerin.jacob, David Hunt

The first patch in this series added the framework for an external
mempool manager. This patch in the series adds a set of default
ops (function callbacks) based on rte_ring.
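
This patch's four ring-backed ops structs are what
rte_mempool_create_empty() selects between based on the creation flags.
The intended mapping can be sketched as a small standalone helper (the
helper itself is illustrative; the flag values mirror the DPDK
definitions):

```c
#include <assert.h>
#include <string.h>

/* Values mirror the DPDK MEMPOOL_F_* flag definitions. */
#define MEMPOOL_F_SP_PUT 0x0004
#define MEMPOOL_F_SC_GET 0x0008

/* Map the SP/SC creation flags to the name of the default ops struct:
 * both flags -> sp/sc, one flag -> the corresponding mixed variant,
 * neither -> the fully multi-producer/multi-consumer default. */
static const char *
default_ops_name(int flags)
{
	if ((flags & MEMPOOL_F_SP_PUT) && (flags & MEMPOOL_F_SC_GET))
		return "ring_sp_sc";
	else if (flags & MEMPOOL_F_SP_PUT)
		return "ring_sp_mc";
	else if (flags & MEMPOOL_F_SC_GET)
		return "ring_mp_sc";
	return "ring_mp_mc";
}
```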

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
---
 lib/librte_mempool/Makefile              |   1 +
 lib/librte_mempool/rte_mempool.h         |   2 +
 lib/librte_mempool/rte_mempool_default.c | 153 +++++++++++++++++++++++++++++++
 3 files changed, 156 insertions(+)
 create mode 100644 lib/librte_mempool/rte_mempool_default.c

diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
index 4cbf772..8cac29b 100644
--- a/lib/librte_mempool/Makefile
+++ b/lib/librte_mempool/Makefile
@@ -43,6 +43,7 @@ LIBABIVER := 2
 # all source are stored in SRCS-y
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool.c
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_ops.c
+SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_default.c
 # install includes
 SYMLINK-$(CONFIG_RTE_LIBRTE_MEMPOOL)-include := rte_mempool.h
 
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index c33eeb8..b90f57c 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -413,6 +413,8 @@ extern struct rte_mempool_ops_table rte_mempool_ops_table;
 static inline struct rte_mempool_ops *
 rte_mempool_ops_get(int ops_index)
 {
+	RTE_VERIFY(ops_index < RTE_MEMPOOL_MAX_OPS_IDX);
+
 	return &rte_mempool_ops_table.ops[ops_index];
 }
 
diff --git a/lib/librte_mempool/rte_mempool_default.c b/lib/librte_mempool/rte_mempool_default.c
new file mode 100644
index 0000000..f2e7d95
--- /dev/null
+++ b/lib/librte_mempool/rte_mempool_default.c
@@ -0,0 +1,153 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <string.h>
+
+#include <rte_errno.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+
+static int
+common_ring_mp_put(void *p, void * const *obj_table, unsigned n)
+{
+	return rte_ring_mp_enqueue_bulk((struct rte_ring *)p, obj_table, n);
+}
+
+static int
+common_ring_sp_put(void *p, void * const *obj_table, unsigned n)
+{
+	return rte_ring_sp_enqueue_bulk((struct rte_ring *)p, obj_table, n);
+}
+
+static int
+common_ring_mc_get(void *p, void **obj_table, unsigned n)
+{
+	return rte_ring_mc_dequeue_bulk((struct rte_ring *)p, obj_table, n);
+}
+
+static int
+common_ring_sc_get(void *p, void **obj_table, unsigned n)
+{
+	return rte_ring_sc_dequeue_bulk((struct rte_ring *)p, obj_table, n);
+}
+
+static unsigned
+common_ring_get_count(void *p)
+{
+	return rte_ring_count((struct rte_ring *)p);
+}
+
+
+static void *
+common_ring_alloc(const struct rte_mempool *mp)
+{
+	int rg_flags = 0, ret;
+	char rg_name[RTE_RING_NAMESIZE];
+	struct rte_ring *r;
+
+	ret = snprintf(rg_name, sizeof(rg_name),
+		RTE_MEMPOOL_MZ_FORMAT, mp->name);
+	if (ret < 0 || ret >= (int)sizeof(rg_name)) {
+		rte_errno = ENAMETOOLONG;
+		return NULL;
+	}
+
+	/* ring flags */
+	if (mp->flags & MEMPOOL_F_SP_PUT)
+		rg_flags |= RING_F_SP_ENQ;
+	if (mp->flags & MEMPOOL_F_SC_GET)
+		rg_flags |= RING_F_SC_DEQ;
+
+	/* Allocate the ring that will be used to store objects.
+	 * Ring functions will return appropriate errors if we are
+	 * running as a secondary process etc., so no checks made
+	 * in this function for that condition. */
+	r = rte_ring_create(rg_name, rte_align32pow2(mp->size + 1),
+		mp->socket_id, rg_flags);
+
+	return r;
+}
+
+static void
+common_ring_free(void *p)
+{
+	rte_ring_free((struct rte_ring *)p);
+}
+
+/*
+ * The following 4 declarations of mempool ops structs address
+ * the need for the backward compatible mempool managers for
+ * single/multi producers and single/multi consumers as dictated by the
+ * flags provided to the rte_mempool_create function
+ */
+static const struct rte_mempool_ops ops_mp_mc = {
+	.name = "ring_mp_mc",
+	.alloc = common_ring_alloc,
+	.free = common_ring_free,
+	.put = common_ring_mp_put,
+	.get = common_ring_mc_get,
+	.get_count = common_ring_get_count,
+};
+
+static const struct rte_mempool_ops ops_sp_sc = {
+	.name = "ring_sp_sc",
+	.alloc = common_ring_alloc,
+	.free = common_ring_free,
+	.put = common_ring_sp_put,
+	.get = common_ring_sc_get,
+	.get_count = common_ring_get_count,
+};
+
+static const struct rte_mempool_ops ops_mp_sc = {
+	.name = "ring_mp_sc",
+	.alloc = common_ring_alloc,
+	.free = common_ring_free,
+	.put = common_ring_mp_put,
+	.get = common_ring_sc_get,
+	.get_count = common_ring_get_count,
+};
+
+static const struct rte_mempool_ops ops_sp_mc = {
+	.name = "ring_sp_mc",
+	.alloc = common_ring_alloc,
+	.free = common_ring_free,
+	.put = common_ring_sp_put,
+	.get = common_ring_mc_get,
+	.get_count = common_ring_get_count,
+};
+
+MEMPOOL_REGISTER_OPS(ops_mp_mc);
+MEMPOOL_REGISTER_OPS(ops_sp_sc);
+MEMPOOL_REGISTER_OPS(ops_mp_sc);
+MEMPOOL_REGISTER_OPS(ops_sp_mc);
-- 
2.5.5


* [PATCH v7 4/5] app/test: test external mempool manager
  2016-06-02 13:27           ` [PATCH v7 0/5] mempool: add external mempool manager David Hunt
                               ` (2 preceding siblings ...)
  2016-06-02 13:27             ` [PATCH v7 3/5] mempool: add default external mempool ops David Hunt
@ 2016-06-02 13:27             ` David Hunt
  2016-06-02 13:27             ` [PATCH v7 5/5] mbuf: allow apps to change default mempool ops David Hunt
  2016-06-03 14:58             ` [PATCH v8 0/5] mempool: add external mempool manager David Hunt
  5 siblings, 0 replies; 238+ messages in thread
From: David Hunt @ 2016-06-02 13:27 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, viktorin, jerin.jacob, David Hunt

Add a minimal custom mempool ops implementation and check that it
also passes the basic mempool autotests.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
---
 app/test/test_mempool.c | 114 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 114 insertions(+)

diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c
index b586249..52d6f4e 100644
--- a/app/test/test_mempool.c
+++ b/app/test/test_mempool.c
@@ -83,6 +83,97 @@
 static rte_atomic32_t synchro;
 
 /*
+ * Simple example of custom mempool structure. Holds pointers to all the
+ * elements which are simply malloc'd in this example.
+ */
+struct custom_mempool {
+	rte_spinlock_t lock;
+	unsigned count;
+	unsigned size;
+	void *elts[];
+};
+
+/*
+ * Loop through all the element pointers and allocate a chunk of memory, then
+ * insert that memory into the ring.
+ */
+static void *
+custom_mempool_alloc(const struct rte_mempool *mp)
+{
+	struct custom_mempool *cm;
+
+	cm = rte_zmalloc("custom_mempool",
+		sizeof(struct custom_mempool) + mp->size * sizeof(void *), 0);
+	if (cm == NULL)
+		return NULL;
+
+	rte_spinlock_init(&cm->lock);
+	cm->count = 0;
+	cm->size = mp->size;
+	return cm;
+}
+
+static void
+custom_mempool_free(void *p)
+{
+	rte_free(p);
+}
+
+static int
+custom_mempool_put(void *p, void * const *obj_table, unsigned n)
+{
+	struct custom_mempool *cm = (struct custom_mempool *)p;
+	int ret = 0;
+
+	rte_spinlock_lock(&cm->lock);
+	if (cm->count + n > cm->size) {
+		ret = -ENOBUFS;
+	} else {
+		memcpy(&cm->elts[cm->count], obj_table, sizeof(void *) * n);
+		cm->count += n;
+	}
+	rte_spinlock_unlock(&cm->lock);
+	return ret;
+}
+
+
+static int
+custom_mempool_get(void *p, void **obj_table, unsigned n)
+{
+	struct custom_mempool *cm = (struct custom_mempool *)p;
+	int ret = 0;
+
+	rte_spinlock_lock(&cm->lock);
+	if (n > cm->count) {
+		ret = -ENOENT;
+	} else {
+		cm->count -= n;
+		memcpy(obj_table, &cm->elts[cm->count], sizeof(void *) * n);
+	}
+	rte_spinlock_unlock(&cm->lock);
+	return ret;
+}
+
+static unsigned
+custom_mempool_get_count(void *p)
+{
+	struct custom_mempool *cm = (struct custom_mempool *)p;
+
+	return cm->count;
+}
+
+static struct rte_mempool_ops mempool_ops_custom = {
+	.name = "custom_handler",
+	.alloc = custom_mempool_alloc,
+	.free = custom_mempool_free,
+	.put = custom_mempool_put,
+	.get = custom_mempool_get,
+	.get_count = custom_mempool_get_count,
+};
+
+MEMPOOL_REGISTER_OPS(mempool_ops_custom);
+
+/*
  * save the object number in the first 4 bytes of object data. All
  * other bytes are set to 0.
  */
@@ -477,6 +568,7 @@ test_mempool(void)
 {
 	struct rte_mempool *mp_cache = NULL;
 	struct rte_mempool *mp_nocache = NULL;
+	struct rte_mempool *mp_ext = NULL;
 
 	rte_atomic32_init(&synchro);
 
@@ -505,6 +597,27 @@ test_mempool(void)
 		goto err;
 	}
 
+	/* create a mempool with an external handler */
+	mp_ext = rte_mempool_create_empty("test_ext",
+		MEMPOOL_SIZE,
+		MEMPOOL_ELT_SIZE,
+		RTE_MEMPOOL_CACHE_MAX_SIZE, 0,
+		SOCKET_ID_ANY, 0);
+
+	if (mp_ext == NULL) {
+		printf("cannot allocate mp_ext mempool\n");
+		goto err;
+	}
+	if (rte_mempool_set_ops_byname(mp_ext, "custom_handler") < 0) {
+		printf("cannot set custom handler\n");
+		goto err;
+	}
+	if (rte_mempool_populate_default(mp_ext) < 0) {
+		printf("cannot populate mp_ext mempool\n");
+		goto err;
+	}
+	rte_mempool_obj_iter(mp_ext, my_obj_init, NULL);
+
 	/* retrieve the mempool from its name */
 	if (rte_mempool_lookup("test_nocache") != mp_nocache) {
 		printf("Cannot lookup mempool from its name\n");
@@ -545,6 +658,7 @@ test_mempool(void)
 err:
 	rte_mempool_free(mp_nocache);
 	rte_mempool_free(mp_cache);
+	rte_mempool_free(mp_ext);
 	return -1;
 }
 
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 238+ messages in thread

* [PATCH v7 5/5] mbuf: allow apps to change default mempool ops
  2016-06-02 13:27           ` [PATCH v7 0/5] mempool: add external mempool manager David Hunt
                               ` (3 preceding siblings ...)
  2016-06-02 13:27             ` [PATCH v7 4/5] app/test: test external mempool manager David Hunt
@ 2016-06-02 13:27             ` David Hunt
  2016-06-03 12:28               ` Olivier MATZ
  2016-06-03 14:58             ` [PATCH v8 0/5] mempool: add external mempool manager David Hunt
  5 siblings, 1 reply; 238+ messages in thread
From: David Hunt @ 2016-06-02 13:27 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, viktorin, jerin.jacob, David Hunt

By default, the mempool ops used for mbuf allocation are a multi-producer,
multi-consumer ring. We could imagine a target (maybe some network
processors?) that provides a hardware-assisted pool mechanism. In this
case, the default configuration for that architecture would contain a
different value for RTE_MBUF_DEFAULT_MEMPOOL_OPS.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
---
 config/common_base         |  1 +
 lib/librte_mbuf/rte_mbuf.c | 26 ++++++++++++++++++++++----
 2 files changed, 23 insertions(+), 4 deletions(-)

diff --git a/config/common_base b/config/common_base
index 47c26f6..899c038 100644
--- a/config/common_base
+++ b/config/common_base
@@ -394,6 +394,7 @@ CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG=n
 #
 CONFIG_RTE_LIBRTE_MBUF=y
 CONFIG_RTE_LIBRTE_MBUF_DEBUG=n
+CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS="ring_mp_mc"
 CONFIG_RTE_MBUF_REFCNT_ATOMIC=y
 CONFIG_RTE_PKTMBUF_HEADROOM=128
 
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index eec1456..491230c 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -153,6 +153,7 @@ rte_pktmbuf_pool_create(const char *name, unsigned n,
 	unsigned cache_size, uint16_t priv_size, uint16_t data_room_size,
 	int socket_id)
 {
+	struct rte_mempool *mp;
 	struct rte_pktmbuf_pool_private mbp_priv;
 	unsigned elt_size;
 
@@ -167,10 +168,27 @@ rte_pktmbuf_pool_create(const char *name, unsigned n,
 	mbp_priv.mbuf_data_room_size = data_room_size;
 	mbp_priv.mbuf_priv_size = priv_size;
 
-	return rte_mempool_create(name, n, elt_size,
-		cache_size, sizeof(struct rte_pktmbuf_pool_private),
-		rte_pktmbuf_pool_init, &mbp_priv, rte_pktmbuf_init, NULL,
-		socket_id, 0);
+	mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
+		 sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
+	if (mp == NULL)
+		return NULL;
+
+	rte_errno = rte_mempool_set_ops_byname(mp,
+			RTE_MBUF_DEFAULT_MEMPOOL_OPS);
+	if (rte_errno != 0) {
+		RTE_LOG(ERR, MBUF, "error setting mempool handler\n");
+		return NULL;
+	}
+	rte_pktmbuf_pool_init(mp, &mbp_priv);
+
+	if (rte_mempool_populate_default(mp) < 0) {
+		rte_mempool_free(mp);
+		return NULL;
+	}
+
+	rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);
+
+	return mp;
 }
 
 /* do some sanity checks on a mbuf: panic if it fails */
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 238+ messages in thread

* [PATCH v7 0/5] mempool: add external mempool manager
  2016-06-02 13:27             ` [PATCH v7 1/5] mempool: support external mempool operations David Hunt
@ 2016-06-02 13:38               ` Hunt, David
  2016-06-03  6:38               ` [PATCH v7 1/5] mempool: support external mempool operations Jerin Jacob
  2016-06-03 12:28               ` Olivier MATZ
  2 siblings, 0 replies; 238+ messages in thread
From: Hunt, David @ 2016-06-02 13:38 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, viktorin, jerin.jacob

Since the cover letter seems to have gone missing, sending it again:

Here's the latest version of the External Mempool Manager patchset.
It's re-based on top of the latest head as of 19/5/2016, including
Olivier's 35-part patch series on mempool re-org [1]

[1] http://dpdk.org/ml/archives/dev/2016-May/039229.html


v7 changes:

  * Changed rte_mempool_handler_table to rte_mempool_ops_table
  * Changed hander_idx to ops_index in rte_mempool struct
  * Reworked comments in rte_mempool.h around ops functions
  * Changed rte_mempool_handler.c to rte_mempool_ops.c
  * Changed all functions containing _handler_ to _ops_
  * Now there is no mention of 'handler' left
  * Other small changes out of review of mailing list

v6 changes:

  * Moved the flags handling from rte_mempool_create_empty to
    rte_mempool_create, as it's only there for backward compatibility
  * Various comment additions and cleanup
  * Renamed rte_mempool_handler to rte_mempool_ops
  * Added a union for *pool and u64 pool_id in struct rte_mempool
  * split the original patch into a few parts for easier review.
  * rename functions with _ext_ to _ops_.
  * addressed review comments
  * renamed put and get functions to enqueue and dequeue
  * changed occurrences of rte_mempool_ops to const, as they
    contain function pointers (security)
  * split out the default external mempool handler into a separate
    patch for easier review

v5 changes:
  * rebasing, as it is dependent on another patch series [1]

v4 changes (Olivier Matz):
  * remove the rte_mempool_create_ext() function. To change the handler, the
    user has to do the following:
    - mp = rte_mempool_create_empty()
    - rte_mempool_set_handler(mp, "my_handler")
    - rte_mempool_populate_default(mp)
    This avoids adding another function with more than 10 arguments,
    duplicating the doxygen comments
  * change the API of rte_mempool_alloc_t: only the mempool pointer is
    required, as all the information is available in it
  * change the API of rte_mempool_free_t: remove return value
  * move inline wrapper functions from the .c to the .h (else they won't be
    inlined). This implies having one header file (rte_mempool.h); otherwise
    it would have generated cross-dependency issues.
  * remove now unused MEMPOOL_F_INT_HANDLER (note: it was misused anyway due
    to the use of && instead of &)
  * fix build in debug mode (__MEMPOOL_STAT_ADD(mp, put_pool, n) remaining)
  * fix build with shared libraries (global handler has to be declared in
    the .map file)
  * rationalize #include order
  * remove unused function rte_mempool_get_handler_name()
  * rename some structures, fields, functions
  * remove the static in front of rte_tailq_elem rte_mempool_tailq (comment
    from Yuanhan)
  * test the ext mempool handler in the same file as the standard mempool
    tests, avoiding code duplication
  * rework the custom handler in mempool_test
  * rework a bit the patch selecting default mbuf pool handler
  * fix some doxygen comments

v3 changes:
  * simplified the file layout, renamed to rte_mempool_handler.[hc]
  * moved the default handlers into rte_mempool_default.c
  * moved the example handler out into app/test/test_ext_mempool.c
  * removed is_mc/is_mp change, slight perf degradation on sp cached
    operation
  * removed stack handler, may re-introduce at a later date
  * Changes out of code reviews

v2 changes:
  * There was a lot of duplicate code between rte_mempool_xmem_create and
    rte_mempool_create_ext. This has now been refactored and is now
    hopefully cleaner.
  * The RTE_NEXT_ABI define is now used to allow building of the library
    in a format that is compatible with binaries built against previous
    versions of DPDK.
  * Changes out of code reviews. Hopefully I've got most of them included.

The External Mempool Manager is an extension to the mempool API that allows
users to add and use an external mempool manager, enabling external memory
subsystems such as external hardware memory management systems and software
based memory allocators to be used with DPDK.

The existing API to the internal DPDK mempool manager will remain unchanged
and will be backward compatible. However, there will be an ABI breakage, as
the mempool struct is changing. These changes are all contained within
RTE_NEXT_ABI defs, and the current or next code can be selected with
the CONFIG_RTE_NEXT_ABI config setting.

There are two aspects to the external mempool manager:
   1. Adding the code for your new mempool operations (ops). This is
      achieved by adding a new mempool ops source file into the
      librte_mempool library, and registering it with the
      MEMPOOL_REGISTER_OPS macro.
   2. Using the new API: call rte_mempool_create_empty() and
      rte_mempool_set_ops_byname() to create a new mempool, using the
      name parameter to identify which ops to use.

New API calls added:
  1. A new rte_mempool_create_empty() function
  2. rte_mempool_set_ops_byname(), which sets the mempool's ops (functions)
  3. rte_mempool_populate_default() and rte_mempool_populate_anon()
     functions, which populate the mempool using the relevant ops

Several external mempool managers may be used in the same application. A new
mempool can then be created by using the new 'create' function, providing the
mempool ops struct name to point the mempool to the relevant mempool manager
callback structure.

The old 'create' function can still be called by legacy programs, and will
internally work out the mempool handle based on the flags provided (single
producer, single consumer, etc). By default, handles are created internally
to implement the built-in DPDK mempool manager and mempool types.

The external mempool manager needs to provide the following functions:
  1. alloc     - allocates the mempool memory, and adds each object onto
                 a ring
  2. put       - puts an object back into the mempool once an application
                 has finished with it
  3. get       - gets an object from the mempool for use by the application
  4. get_count - gets the number of available objects in the mempool
  5. free      - frees the mempool memory

Every time a get/put/get_count is called from the application/PMD, the
callback for that mempool is called. These functions are in the fastpath,
and any unoptimised ops may limit performance.

The new APIs are as follows:

1. rte_mempool_create_empty

struct rte_mempool *
rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
     unsigned cache_size, unsigned private_data_size,
     int socket_id, unsigned flags);

2. rte_mempool_set_ops_byname()

int
rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name);

3. rte_mempool_populate_default()

int rte_mempool_populate_default(struct rte_mempool *mp);

4. rte_mempool_populate_anon()

int rte_mempool_populate_anon(struct rte_mempool *mp);

Please see rte_mempool.h for further information on the parameters.


The important thing to note is that the mempool ops struct is passed by name
to rte_mempool_set_ops_byname, which looks through the ops struct array to
get the ops_index, which is then stored in the rte_mempool structure. This
allows multiple processes to use the same mempool, as the function pointers
are accessed via the ops index.

The mempool ops structure contains callbacks to the implementation of
the ops function, and is set up for registration as follows:

static const struct rte_mempool_ops ops_sp_mc = {
     .name = "ring_sp_mc",
     .alloc = rte_mempool_common_ring_alloc,
     .put = common_ring_sp_put,
     .get = common_ring_mc_get,
     .get_count = common_ring_get_count,
     .free = common_ring_free,
};

The following macro will then register the ops in the array of ops
structures:

MEMPOOL_REGISTER_OPS(ops_sp_mc);

For an example of API usage, please see app/test/test_mempool.c, which
implements a rudimentary "custom_handler" mempool manager using simple
mallocs for each mempool object. This file also contains the callbacks and
self registration for the new handler.

David Hunt (4):
   mempool: support external mempool operations
   mempool: remove rte_ring from rte_mempool struct
   mempool: add default external mempool ops
   mbuf: allow apps to change default mempool ops

Olivier Matz (1):
   app/test: test external mempool manager

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v6 1/5] mempool: support external handler
  2016-06-02 11:23               ` Hunt, David
@ 2016-06-02 13:43                 ` Jan Viktorin
  0 siblings, 0 replies; 238+ messages in thread
From: Jan Viktorin @ 2016-06-02 13:43 UTC (permalink / raw)
  To: Hunt, David; +Cc: dev, olivier.matz, jerin.jacob

On Thu, 2 Jun 2016 12:23:41 +0100
"Hunt, David" <david.hunt@intel.com> wrote:

> On 6/1/2016 6:54 PM, Jan Viktorin wrote:
> >
> >   		mp->populated_size--;
> > @@ -383,13 +349,16 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
> >   	unsigned i = 0;
> >   	size_t off;
> >   	struct rte_mempool_memhdr *memhdr;
> > -	int ret;
> >   
> >   	/* create the internal ring if not already done */
> >   	if ((mp->flags & MEMPOOL_F_RING_CREATED) == 0) {
> > -		ret = rte_mempool_ring_create(mp);
> > -		if (ret < 0)
> > -			return ret;
> > +		rte_errno = 0;
> > +		mp->pool = rte_mempool_ops_alloc(mp);
> > +		if (mp->pool == NULL) {
> > +			if (rte_errno == 0)
> > +				return -EINVAL;
> > +			return -rte_errno;
> > Are you sure the rte_errno is a positive value?  
> 
> If the user returns EINVAL, or similar, we want to return negative, as 
> per the rest of DPDK.

Oh, yes... you're right.

> 
> >> @@ -204,9 +205,13 @@ struct rte_mempool_memhdr {
> >>   struct rte_mempool {
> >>   	char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool. */
> >>   	struct rte_ring *ring;           /**< Ring to store objects. */
> >> +	union {
> >> +		void *pool;              /**< Ring or pool to store objects */  
> > What about calling this pdata or priv? I think, it can improve some doc comments.
> > Describing a "pool" may refer to both the rte_mempool itself or to the mp->pool
> > pointer. The "priv" would help to understand the code a bit.
> >
> > Then the rte_mempool_alloc_t can be called rte_mempool_priv_alloc_t. Or something
> > similar. It's than clear enough, what the function should do...  
> 
> I'd lean towards pdata or maybe pool_data. Not sure about the function
> name change though; the function does return a pool_data pointer, which
> we put into mp->pool_data.

Yes. But from the name of the function, it is difficult to understand what is its
purpose.

> 
> >> +		uint64_t *pool_id;       /**< External mempool identifier */  
> > Why is pool_id a pointer? Is it a typo? I've understood it should be 64b wide
> > from the discussion (Olivier, Jerin, David):  
> 

[...]

> 
> >> +typedef void (*rte_mempool_free_t)(void *p);
> >> +
> >> +/**
> >> + * Put an object in the external pool.
> >> + * The *p pointer is the opaque data for a given mempool handler (ring,
> >> + * array, linked list, etc)  
> > The obj_table is not documented. Is it really a table? I'd called an array instead.  
> 
> You're probably right, but it's always been called obj_table, and I'm
> not sure this patchset is the place to change it. Maybe a follow-up patch?

In that case, it's OK.

> 
> >> + */
> >> +typedef int (*rte_mempool_put_t)(void *p, void * const *obj_table, unsigned n);  
> > unsigned int  
> Done
> >  
> >> +
> >> +/** Get an object from the external pool. */
> >> +typedef int (*rte_mempool_get_t)(void *p, void **obj_table, unsigned n);  
> > unsigned int  
> Done
> >  
> >> +
> >> +/** Return the number of available objects in the external pool. */  
> > Is the number of available objects or the total number of all objects
> > (so probably a constant value)?  
> 
> It's intended to be the number of available objects

OK, forget it.

> 
> >  
> >> +typedef unsigned (*rte_mempool_get_count)(void *p);  
> > Should it be const void *p?  
> 
> Yes, it could be. Changed.
> 
> >  

[...]

> 
> >> + * @return
> >> + *   - 0: Sucess; the new handler is configured.
> >> + *   - EINVAL - Invalid handler name provided
> >> + *   - EEXIST - mempool already has a handler assigned  
> > They are returned as -EINVAL and -EEXIST.
> >
> > IMHO, using "-" here is counter-intuitive:
> >
> >   - EINVAL
> >
> > does it mean a positive with "-" or negative value?  
> 
> EINVAL is positive, so it's returning negative. Common usage in DPDK, 
> afaics.

Yes, of course. But it is not so clear from the doc. I had already written
code checking for positive error codes, because there was no "minus sign"
in the doc. So my code was wrong, and it took me a while to find out the
reason; I assumed the positive value was intentional. Finally, I had to
look up the source code (the calling path) to verify...

> 
> 
> >> + */
> >> +int
> >> +rte_mempool_set_handler(struct rte_mempool *mp, const char *name);  
> > rte_mempool_set_ops
> >
> > What about rte_mempool_set_ops_byname? Not a big deal...  
> 
> I agree.  rte_mempool_set_ops_byname
> 
> >> +
> >> +/**
> >> + * Register an external pool handler.  
> > Register mempool operations  
> 
> Done
> 
> >> + *
> >> + * @param h
> >> + *   Pointer to the external pool handler
> >> + * @return
> >> + *   - >=0: Sucess; return the index of the handler in the table.
> >> + *   - EINVAL - missing callbacks while registering handler
> >> + *   - ENOSPC - the maximum number of handlers has been reached  
> > - -EINVAL
> > - -ENOSPC  
> 
> :)

Similar as above... If it's a DPDK standard to write it like this, then I am
OK with that.

> 

[...]


-- 
   Jan Viktorin                  E-mail: Viktorin@RehiveTech.com
   System Architect              Web:    www.RehiveTech.com
   RehiveTech
   Brno, Czech Republic

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v7 1/5] mempool: support external mempool operations
  2016-06-02 13:27             ` [PATCH v7 1/5] mempool: support external mempool operations David Hunt
  2016-06-02 13:38               ` [PATCH v7 0/5] mempool: add external mempool manager Hunt, David
@ 2016-06-03  6:38               ` Jerin Jacob
  2016-06-03 10:28                 ` Hunt, David
  2016-06-03 12:28               ` Olivier MATZ
  2 siblings, 1 reply; 238+ messages in thread
From: Jerin Jacob @ 2016-06-03  6:38 UTC (permalink / raw)
  To: David Hunt; +Cc: dev, olivier.matz, viktorin

On Thu, Jun 02, 2016 at 02:27:19PM +0100, David Hunt wrote:

[snip]

>  	/* create the internal ring if not already done */
>  	if ((mp->flags & MEMPOOL_F_RING_CREATED) == 0) {

|> 1) May be RING can be replaced with some other higher abstraction name
|> for the internal MEMPOOL_F_RING_CREATED flag
|
|Agreed. I'll change to MEMPOOL_F_POOL_CREATED, since we're already
|changing the *ring element of the mempool struct to *pool

Looks like you have missed the above review comment?

[snip]

> static inline struct rte_mempool_ops *
> rte_mempool_ops_get(int ops_index)
>
> return &rte_mempool_ops_table.ops[ops_index];

|> 2) Considering "get" and "put" are the fast-path callbacks for
|> pool-manager, is it possible to avoid the extra overhead of the
|> following
|> _load_ and additional cache line on each call,
|> rte_mempool_handler_table.handler[handler_idx]
|>
|> I understand it is for multiprocess support but I am thinking, can we
|> introduce something like ethernet API support for multiprocess and
|> resolve "put" and "get" functions pointer on init and store in
|> struct mempool. Some thinking like
|>
|> file: drivers/net/ixgbe/ixgbe_ethdev.c
|> search for if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
|
|I'll look at this one before posting the next version of the patch 
|(soon). :)

Have you checked the above comment? If it's difficult, then postpone it. But
IMO it will reduce a few cycles in the fast path and reduce the cache usage
in the fast path.

[snip]

> +
> +/**
> + * @internal wrapper for external mempool manager put callback.
> + *
> + * @param mp
> + *   Pointer to the memory pool.
> + * @param obj_table
> + *   Pointer to a table of void * pointers (objects).
> + * @param n
> + *   Number of objects to put.
> + * @return
> + *   - 0: Success; n objects supplied.
> + *   - <0: Error; code of put function.
> + */
> +static inline int
> +rte_mempool_ops_enqueue_bulk(struct rte_mempool *mp, void * const *obj_table,
> +		unsigned n)
> +{
> +	struct rte_mempool_ops *ops;
> +
> +	ops = rte_mempool_ops_get(mp->ops_index);
> +	return ops->put(mp->pool_data, obj_table, n);

"pool_data" is passed by value. On 32-bit systems, casting back to pool_id
will be an issue, as void * on 32-bit is 4B. IMO, maybe we can pass a
uint64_t by value and typecast it to void * to fix it.

Jerin

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v7 1/5] mempool: support external mempool operations
  2016-06-03  6:38               ` [PATCH v7 1/5] mempool: support external mempool operations Jerin Jacob
@ 2016-06-03 10:28                 ` Hunt, David
  2016-06-03 10:49                   ` Jerin Jacob
  2016-06-03 11:07                   ` Olivier MATZ
  0 siblings, 2 replies; 238+ messages in thread
From: Hunt, David @ 2016-06-03 10:28 UTC (permalink / raw)
  To: Jerin Jacob; +Cc: dev, olivier.matz, viktorin



On 6/3/2016 7:38 AM, Jerin Jacob wrote:
> On Thu, Jun 02, 2016 at 02:27:19PM +0100, David Hunt wrote:
>
> [snip]
>
>>   	/* create the internal ring if not already done */
>>   	if ((mp->flags & MEMPOOL_F_RING_CREATED) == 0) {
> |> 1) May be RING can be replaced with some other higher abstraction name
> |> for the internal MEMPOOL_F_RING_CREATED flag
> |
> |Agreed. I'll change to MEMPOOL_F_POOL_CREATED, since we're already
> |changing the *ring
> |element of the mempool struct to *pool
>
> Looks like you have missed the above review comment?

Ah, yes, I'll address in the next patch.

I'll have to remove some 'const's in the alloc functions anyway, which I
introduced in the last revision and shouldn't have. Future handlers
(including the stack handler) will need to set the pool_data in the alloc.


>
> [snip]
>
>> static inline struct rte_mempool_ops *
>> rte_mempool_ops_get(int ops_index)
>>
>> return &rte_mempool_ops_table.ops[ops_index];
> |> 2) Considering "get" and "put" are the fast-path callbacks for
> |> pool-manger, Is it possible to avoid the extra overhead of the
> |> following
> |> _load_ and additional cache line on each call,
> |> rte_mempool_handler_table.handler[handler_idx]
> |>
> |> I understand it is for multiprocess support but I am thing can we
> |> introduce something like ethernet API support for multiprocess and
> |> resolve "put" and "get" functions pointer on init and store in
> |> struct mempool. Some thinking like
> |>
> |> file: drivers/net/ixgbe/ixgbe_ethdev.c
> |> search for if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
> |
> |I'll look at this one before posting the next version of the patch
> |(soon). :)
>
> Have you checked the above comment, if it difficult then postpone it. But
> IMO it will reduce few cycles in fast-path and reduce the cache usage in
> fast path
>
> [snip]

I looked at it, but I'd need to do some more digging into it to see how it
can be done properly. I'm not seeing any performance drop at the moment, and
it may be a good way to improve further down the line. Is it OK to postpone?

>> +
>> +/**
>> + * @internal wrapper for external mempool manager put callback.
>> + *
>> + * @param mp
>> + *   Pointer to the memory pool.
>> + * @param obj_table
>> + *   Pointer to a table of void * pointers (objects).
>> + * @param n
>> + *   Number of objects to put.
>> + * @return
>> + *   - 0: Success; n objects supplied.
>> + *   - <0: Error; code of put function.
>> + */
>> +static inline int
>> +rte_mempool_ops_enqueue_bulk(struct rte_mempool *mp, void * const *obj_table,
>> +		unsigned n)
>> +{
>> +	struct rte_mempool_ops *ops;
>> +
>> +	ops = rte_mempool_ops_get(mp->ops_index);
>> +	return ops->put(mp->pool_data, obj_table, n);
> Pass by value of "pool_data", On 32 bit systems, casting back to pool_id will
> be an issue as void* on 32 bit is 4B. IMO, May be can use uint64_t to
> pass by value and typecast to void* to fix it.

OK. I see the problem. I see 4 callbacks that need to change: free, get,
put and get_count. So the callbacks will be:

typedef void *(*rte_mempool_alloc_t)(struct rte_mempool *mp);
typedef void (*rte_mempool_free_t)(uint64_t p);
typedef int (*rte_mempool_put_t)(uint64_t p, void * const *obj_table,
	unsigned int n);
typedef int (*rte_mempool_get_t)(uint64_t p, void **obj_table,
	unsigned int n);
typedef unsigned (*rte_mempool_get_count)(uint64_t p);


Regards,
Dave.

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v7 1/5] mempool: support external mempool operations
  2016-06-03 10:28                 ` Hunt, David
@ 2016-06-03 10:49                   ` Jerin Jacob
  2016-06-03 11:07                   ` Olivier MATZ
  1 sibling, 0 replies; 238+ messages in thread
From: Jerin Jacob @ 2016-06-03 10:49 UTC (permalink / raw)
  To: Hunt, David; +Cc: dev, olivier.matz, viktorin

On Fri, Jun 03, 2016 at 11:28:14AM +0100, Hunt, David wrote:
> > 
> > > static inline struct rte_mempool_ops *
> > > rte_mempool_ops_get(int ops_index)
> > > 
> > > return &rte_mempool_ops_table.ops[ops_index];
> > |> 2) Considering "get" and "put" are the fast-path callbacks for
> > |> pool-manger, Is it possible to avoid the extra overhead of the
> > |> following
> > |> _load_ and additional cache line on each call,
> > |> rte_mempool_handler_table.handler[handler_idx]
> > |>
> > |> I understand it is for multiprocess support but I am thing can we
> > |> introduce something like ethernet API support for multiprocess and
> > |> resolve "put" and "get" functions pointer on init and store in
> > |> struct mempool. Some thinking like
> > |>
> > |> file: drivers/net/ixgbe/ixgbe_ethdev.c
> > |> search for if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
> > |
> > |I'll look at this one before posting the next version of the patch
> > |(soon). :)
> > 
> > Have you checked the above comment, if it difficult then postpone it. But
> > IMO it will reduce few cycles in fast-path and reduce the cache usage in
> > fast path
> > 
> > [snip]
> 
> I looked at it, but I'd need to do some more digging into it to see how it
> can be
> done properly. I'm not seeing any performance drop at the moment, and it may
> be a good way to improve further down the line. Is it OK to postpone?

I am OK with fixing it later. The performance issue should show up in use
cases where the mempool "local cache" overflows and the "get" and "put"
function pointers are used. Like crypto and ethdev, fast-path function
pointers can be accommodated in the main structure itself rather than
through one more indirection.

Jerin

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v7 1/5] mempool: support external mempool operations
  2016-06-03 10:28                 ` Hunt, David
  2016-06-03 10:49                   ` Jerin Jacob
@ 2016-06-03 11:07                   ` Olivier MATZ
  2016-06-03 11:42                     ` Jan Viktorin
  2016-06-03 12:10                     ` Hunt, David
  1 sibling, 2 replies; 238+ messages in thread
From: Olivier MATZ @ 2016-06-03 11:07 UTC (permalink / raw)
  To: Hunt, David, Jerin Jacob; +Cc: dev, viktorin



On 06/03/2016 12:28 PM, Hunt, David wrote:
> On 6/3/2016 7:38 AM, Jerin Jacob wrote:
>> On Thu, Jun 02, 2016 at 02:27:19PM +0100, David Hunt wrote:
>>> +/**
>>> + * @internal wrapper for external mempool manager put callback.
>>> + *
>>> + * @param mp
>>> + *   Pointer to the memory pool.
>>> + * @param obj_table
>>> + *   Pointer to a table of void * pointers (objects).
>>> + * @param n
>>> + *   Number of objects to put.
>>> + * @return
>>> + *   - 0: Success; n objects supplied.
>>> + *   - <0: Error; code of put function.
>>> + */
>>> +static inline int
>>> +rte_mempool_ops_enqueue_bulk(struct rte_mempool *mp, void * const
>>> *obj_table,
>>> +        unsigned n)
>>> +{
>>> +    struct rte_mempool_ops *ops;
>>> +
>>> +    ops = rte_mempool_ops_get(mp->ops_index);
>>> +    return ops->put(mp->pool_data, obj_table, n);
>> Pass by value of "pool_data", On 32 bit systems, casting back to
>> pool_id will
>> be an issue as void* on 32 bit is 4B. IMO, May be can use uint64_t to
>> pass by value and typecast to void* to fix it.
> 
> OK. I see the problem. I'll see 4 callbacks that need to change, free,
> get, put and get_count.
> So the callbacks will be:
> typedef void *(*rte_mempool_alloc_t)(struct rte_mempool *mp);
> typedef void (*rte_mempool_free_t)(uint64_t p);
> typedef int (*rte_mempool_put_t)(uint64_t p, void * const *obj_table,
> unsigned int n);
> typedef int (*rte_mempool_get_t)(uint64_t p, void **obj_table, unsigned
> int n);
> typedef unsigned (*rte_mempool_get_count)(uint64_t p);

I don't quite like the uint64_t argument (I expect that most handlers
will use a pointer, and they will have to do a cast). What about giving
a (struct rte_mempool *) instead? The handler function would then
select between void * and uint64_t without a cast.
In that case, maybe the prototype of alloc should be:

  typedef int (*rte_mempool_alloc_t)(struct rte_mempool *mp);

It would directly set mp->pool_data and return 0 or -errno.

By the way, I found a strange thing:

> typedef void (*rte_mempool_free_t)(void *p);

[...]

> void
> rte_mempool_ops_free(struct rte_mempool *mp)
> {
> struct rte_mempool_ops *ops;
> 
> ops = rte_mempool_ops_get(mp->ops_index);
> if (ops->free == NULL)
> return;
> return ops->free(mp);
> }
> 

Seems that the free cb expects mp->pool_data, but mp is passed.



Another alternative to the "uint64_t or ptr" question would be to use
a uintptr_t instead of a uint64_t. This won't be possible if it needs
to be a 64-bit value even on 32-bit architectures. We could then keep
the argument as a pointer, and cast it to uintptr_t if needed.

Regards,
Olivier

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v7 1/5] mempool: support external mempool operations
  2016-06-03 11:07                   ` Olivier MATZ
@ 2016-06-03 11:42                     ` Jan Viktorin
  2016-06-03 12:10                     ` Hunt, David
  1 sibling, 0 replies; 238+ messages in thread
From: Jan Viktorin @ 2016-06-03 11:42 UTC (permalink / raw)
  To: Olivier MATZ, Hunt, David, Jerin Jacob; +Cc: dev

Hello,

sorry for top-posting. I vote for passing rte_mempool instead of void *. This is what I've already pointed at: a kind of type safety. Passing uint64_t just hides the problem. Another way is to name the union and pass it as e.g. union rte_mempool_priv * to the callbacks.

Jan Viktorin
RehiveTech
Sent from a mobile device
  Original message  
From: Olivier MATZ
Sent: Friday, 3 June 2016 13:08
To: Hunt, David; Jerin Jacob
Cc: dev@dpdk.org; viktorin@rehivetech.com
Subject: Re: [PATCH v7 1/5] mempool: support external mempool operations



On 06/03/2016 12:28 PM, Hunt, David wrote:
> On 6/3/2016 7:38 AM, Jerin Jacob wrote:
>> On Thu, Jun 02, 2016 at 02:27:19PM +0100, David Hunt wrote:
>>> +/**
>>> + * @internal wrapper for external mempool manager put callback.
>>> + *
>>> + * @param mp
>>> + * Pointer to the memory pool.
>>> + * @param obj_table
>>> + * Pointer to a table of void * pointers (objects).
>>> + * @param n
>>> + * Number of objects to put.
>>> + * @return
>>> + * - 0: Success; n objects supplied.
>>> + * - <0: Error; code of put function.
>>> + */
>>> +static inline int
>>> +rte_mempool_ops_enqueue_bulk(struct rte_mempool *mp, void * const
>>> *obj_table,
>>> + unsigned n)
>>> +{
>>> + struct rte_mempool_ops *ops;
>>> +
>>> + ops = rte_mempool_ops_get(mp->ops_index);
>>> + return ops->put(mp->pool_data, obj_table, n);
>> Passing "pool_data" by value: on 32-bit systems, casting back to
>> pool_id will
>> be an issue, as void * on 32-bit is 4B. IMO, maybe we can use uint64_t
>> to pass by value and typecast to void * to fix it.
> 
> OK. I see the problem. I see 4 callbacks that need to change: free,
> get, put and get_count.
> So the callbacks will be:
> typedef void *(*rte_mempool_alloc_t)(struct rte_mempool *mp);
> typedef void (*rte_mempool_free_t)(uint64_t p);
> typedef int (*rte_mempool_put_t)(uint64_t p, void * const *obj_table,
> unsigned int n);
> typedef int (*rte_mempool_get_t)(uint64_t p, void **obj_table, unsigned
> int n);
> typedef unsigned (*rte_mempool_get_count)(uint64_t p);

I don't quite like the uint64_t argument (I expect that most handlers
will use a pointer, and they will have to do a cast). What about giving
a (struct rte_mempool *) instead? The handler function would then
select between void * or uint64_t without a cast.
In that case, maybe the prototype of alloc should be:

typedef int (*rte_mempool_alloc_t)(struct rte_mempool *mp);

It would directly set mp->pool_data and return 0 or -errno.

By the way, I found a strange thing:

> typedef void (*rte_mempool_free_t)(void *p);

[...]

> void
> rte_mempool_ops_free(struct rte_mempool *mp)
> {
> struct rte_mempool_ops *ops;
> 
> ops = rte_mempool_ops_get(mp->ops_index);
> if (ops->free == NULL)
> return;
> return ops->free(mp);
> }
> 

Seems that the free cb expects mp->pool_data, but mp is passed.



Another alternative to the "uint64_t or ptr" question would be to use
a uintptr_t instead of a uint64_t. This won't be possible if it needs
to be a 64-bit value even on 32-bit architectures. We could then keep
the argument as pointer, and cast it to uintptr_t if needed.

Regards,
Olivier

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v7 1/5] mempool: support external mempool operations
  2016-06-03 11:07                   ` Olivier MATZ
  2016-06-03 11:42                     ` Jan Viktorin
@ 2016-06-03 12:10                     ` Hunt, David
  1 sibling, 0 replies; 238+ messages in thread
From: Hunt, David @ 2016-06-03 12:10 UTC (permalink / raw)
  To: Olivier MATZ, Jerin Jacob; +Cc: dev, viktorin



On 6/3/2016 12:07 PM, Olivier MATZ wrote:
>
> On 06/03/2016 12:28 PM, Hunt, David wrote:
>> On 6/3/2016 7:38 AM, Jerin Jacob wrote:
>>> On Thu, Jun 02, 2016 at 02:27:19PM +0100, David Hunt wrote:
>>>> +/**
>>>> + * @internal wrapper for external mempool manager put callback.
>>>> + *
>>>> + * @param mp
>>>> + *   Pointer to the memory pool.
>>>> + * @param obj_table
>>>> + *   Pointer to a table of void * pointers (objects).
>>>> + * @param n
>>>> + *   Number of objects to put.
>>>> + * @return
>>>> + *   - 0: Success; n objects supplied.
>>>> + *   - <0: Error; code of put function.
>>>> + */
>>>> +static inline int
>>>> +rte_mempool_ops_enqueue_bulk(struct rte_mempool *mp, void * const
>>>> *obj_table,
>>>> +        unsigned n)
>>>> +{
>>>> +    struct rte_mempool_ops *ops;
>>>> +
>>>> +    ops = rte_mempool_ops_get(mp->ops_index);
>>>> +    return ops->put(mp->pool_data, obj_table, n);
>>> Passing "pool_data" by value: on 32-bit systems, casting back to
>>> pool_id will
>>> be an issue, as void * on 32-bit is 4B. IMO, maybe we can use uint64_t
>>> to pass by value and typecast to void * to fix it.
>> OK. I see the problem. I see 4 callbacks that need to change: free,
>> get, put and get_count.
>> So the callbacks will be:
>> typedef void *(*rte_mempool_alloc_t)(struct rte_mempool *mp);
>> typedef void (*rte_mempool_free_t)(uint64_t p);
>> typedef int (*rte_mempool_put_t)(uint64_t p, void * const *obj_table,
>> unsigned int n);
>> typedef int (*rte_mempool_get_t)(uint64_t p, void **obj_table, unsigned
>> int n);
>> typedef unsigned (*rte_mempool_get_count)(uint64_t p);
> I don't quite like the uint64_t argument (I expect that most handlers
> will use a pointer, and they will have to do a cast). What about giving
> a (struct rte_mempool *) instead? The handler function would then
> select between void * or uint64_t without a cast.
> In that case, maybe the prototype of alloc should be:
>
>    typedef int (*rte_mempool_alloc_t)(struct rte_mempool *mp);
>
> It would directly set mp->pool_data and return 0 or -errno.

I would tend to agree. The uint64 didn't sit well with me :)
I would prefer the rte_mempool*
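
A minimal sketch of an alloc callback under the convention agreed here
(the struct rte_mempool below is a one-field stand-in; the real struct
and the `my_pool` handler state are illustrative only):

```c
#include <errno.h>
#include <stdlib.h>

/* stand-in with only the field this sketch touches */
struct rte_mempool {
	void *pool_data;
};

/* hypothetical private state for a custom handler */
struct my_pool {
	unsigned dummy;
};

/* The callback receives the mempool pointer, sets mp->pool_data
 * itself, and returns 0 or -errno -- no uint64_t/void * cast needed. */
static int
my_pool_alloc(struct rte_mempool *mp)
{
	struct my_pool *p = calloc(1, sizeof(*p));

	if (p == NULL)
		return -ENOMEM;
	mp->pool_data = p;
	return 0;
}
```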


> By the way, I found a strange thing:
>
>> typedef void (*rte_mempool_free_t)(void *p);

Yes, I spotted that earlier; it will be fixed in the next version


> [...]
>
>> void
>> rte_mempool_ops_free(struct rte_mempool *mp)
>> {
>> struct rte_mempool_ops *ops;
>>
>> ops = rte_mempool_ops_get(mp->ops_index);
>> if (ops->free == NULL)
>> return;
>> return ops->free(mp);
>> }
>>
> Seems that the free cb expects mp->pool_data, but mp is passed.

Working on it.

>
>
> Another alternative to the "uint64_t or ptr" question would be to use
> a uintptr_t instead of a uint64_t. This won't be possible if it needs
> to be a 64-bit value even on 32-bit architectures. We could then keep
> the argument as pointer, and cast it to uintptr_t if needed.

I had assumed that the requirement was for 64 bits even on 32-bit OSes.
I've implemented the double cast of the u64 to uintptr_t to struct
pointer to avoid compiler warnings on 32-bit, but I really prefer the
solution of passing the rte_mempool pointer instead. I'll change to *mp.

Regards,
Dave.

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v7 1/5] mempool: support external mempool operations
  2016-06-02 13:27             ` [PATCH v7 1/5] mempool: support external mempool operations David Hunt
  2016-06-02 13:38               ` [PATCH v7 0/5] mempool: add external mempool manager Hunt, David
  2016-06-03  6:38               ` [PATCH v7 1/5] mempool: support external mempool operations Jerin Jacob
@ 2016-06-03 12:28               ` Olivier MATZ
  2 siblings, 0 replies; 238+ messages in thread
From: Olivier MATZ @ 2016-06-03 12:28 UTC (permalink / raw)
  To: David Hunt, dev; +Cc: viktorin, jerin.jacob

Hi,

Some comments below.

On 06/02/2016 03:27 PM, David Hunt wrote:
> --- a/lib/librte_mempool/rte_mempool.h
> +++ b/lib/librte_mempool/rte_mempool.h
> +/**
> + * prototype for implementation specific data provisioning function
> + * The function should provide the implementation specific memory for
> + * for use by the other mempool ops functions in a given mempool ops struct.
> + * E.g. the default ops provides an instance of the rte_ring for this purpose.
> + * it will mostlikely point to a different type of data structure, and
> + * will be transparent to the application programmer.
> + * The function should also not touch the given *mp instance.
> + */
> +typedef void *(*rte_mempool_alloc_t)(const struct rte_mempool *mp);

nit: about doxygen comments, it's better to have a one-line description,
then a blank line, then the full description. While there, please also
check the uppercase at the beginning and the dot when relevant.
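
For reference, the layout described above would look something like
this (the wording of the comment is illustrative, not the final text):

```c
struct rte_mempool; /* forward declaration so this compiles standalone */

/**
 * Allocate the implementation-specific pool data.
 *
 * The full description goes after a blank line: the callback should
 * provide the memory used by the other ops functions of the same ops
 * struct, e.g. an rte_ring instance for the default ops. Note the
 * leading uppercase letter and the trailing dot.
 */
typedef void *(*rte_mempool_alloc_t)(const struct rte_mempool *mp);
```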


> +/**
> + * Structure storing the table of registered ops structs, each of which contain
> + * the function pointers for the mempool ops functions.
> + * Each process has it's own storage for this ops struct aray so that

it's -> its
aray -> array


> + * the mempools can be shared across primary and secondary processes.
> + * The indices used to access the array are valid across processes, whereas
> + * any function pointers stored directly in the mempool struct would not be.
> + * This results in us simply having "ops_index" in the mempool struct.
> + */
> +struct rte_mempool_ops_table {
> +	rte_spinlock_t sl;     /**< Spinlock for add/delete. */
> +	uint32_t num_ops;      /**< Number of used ops structs in the table. */
> +	/**
> +	 * Storage for all possible ops structs.
> +	 */
> +	struct rte_mempool_ops ops[RTE_MEMPOOL_MAX_OPS_IDX];
> +} __rte_cache_aligned;
> +
> +/** Array of registered ops structs */
> +extern struct rte_mempool_ops_table rte_mempool_ops_table;
> +
> +/**
> + * @internal Get the mempool ops struct from its index.
> + *
> + * @param ops_index
> + *   The index of the ops struct in the ops struct table. It must be a valid
> + *   index: (0 <= idx < num_ops).
> + * @return
> + *   The pointer to the ops struct in the table.
> + */
> +static inline struct rte_mempool_ops *
> +rte_mempool_ops_get(int ops_index)
> +{
> +	return &rte_mempool_ops_table.ops[ops_index];
> +}
> +
> +/**
> + * @internal wrapper for external mempool manager alloc callback.

wrapper for mempool_ops alloc callback
?
(same for other functions)


> @@ -922,7 +1124,7 @@ rte_mempool_put(struct rte_mempool *mp, void *obj)
>   */
>  static inline int __attribute__((always_inline))
>  __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
> -		   unsigned n, int is_mc)
> +		   unsigned int n, int is_mc)
>  {
>  	int ret;
>  	struct rte_mempool_cache *cache;

Although "unsigned" does not conform to current checkpatch policy, I
don't think it should be modified in this patch.


> --- /dev/null
> +++ b/lib/librte_mempool/rte_mempool_ops.c

> +
> +#include <rte_mempool.h>
> +
> +/* indirect jump table to support external memory pools */
> +struct rte_mempool_ops_table rte_mempool_ops_table = {
> +	.sl =  RTE_SPINLOCK_INITIALIZER ,
> +	.num_ops = 0
> +};
> +
> +/* add a new ops struct in rte_mempool_ops_table, return its index */
> +int
> +rte_mempool_ops_register(const struct rte_mempool_ops *h)

nit: "h" should be "ops" :)


> +{
> +	struct rte_mempool_ops *ops;
> +	int16_t ops_index;
> +
> +	rte_spinlock_lock(&rte_mempool_ops_table.sl);
> +
> +	if (rte_mempool_ops_table.num_ops >=
> +			RTE_MEMPOOL_MAX_OPS_IDX) {
> +		rte_spinlock_unlock(&rte_mempool_ops_table.sl);
> +		RTE_LOG(ERR, MEMPOOL,
> +			"Maximum number of mempool ops structs exceeded\n");
> +		return -ENOSPC;
> +	}
> +
> +	if (h->put == NULL || h->get == NULL || h->get_count == NULL) {
> +		rte_spinlock_unlock(&rte_mempool_ops_table.sl);
> +		RTE_LOG(ERR, MEMPOOL,
> +			"Missing callback while registering mempool ops\n");
> +		return -EINVAL;
> +	}
> +
> +	ops_index = rte_mempool_ops_table.num_ops++;
> +	ops = &rte_mempool_ops_table.ops[ops_index];
> +	snprintf(ops->name, sizeof(ops->name), "%s", h->name);

I think we should check for truncation here, as it was done in:
85cf00791cca ("mem: avoid memzone/mempool/ring name truncation")
(this should be done before the num_ops++)
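
A hypothetical shape for that check, in the spirit of commit
85cf00791cca (the macro name and size are stand-ins for
sizeof(ops->name)):

```c
#include <string.h>

/* assumed size of ops->name; the real value comes from rte_mempool.h */
#define RTE_MEMPOOL_OPS_NAMESIZE 32

/* Reject a name that would be truncated instead of letting snprintf()
 * truncate it silently during registration. */
static int
mempool_ops_name_fits(const char *name)
{
	/* must leave room for the terminating NUL */
	return strlen(name) < RTE_MEMPOOL_OPS_NAMESIZE;
}
```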


Regards,
Olivier

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v7 2/5] mempool: remove rte_ring from rte_mempool struct
  2016-06-02 13:27             ` [PATCH v7 2/5] mempool: remove rte_ring from rte_mempool struct David Hunt
@ 2016-06-03 12:28               ` Olivier MATZ
  2016-06-03 14:17                 ` Hunt, David
  0 siblings, 1 reply; 238+ messages in thread
From: Olivier MATZ @ 2016-06-03 12:28 UTC (permalink / raw)
  To: David Hunt, dev; +Cc: viktorin, jerin.jacob



On 06/02/2016 03:27 PM, David Hunt wrote:
> Now that we're moving to an external mempool handler, which
> uses a void *pool_data as a pointer to the pool data, remove the
> unneeded ring pointer from the mempool struct.
> 
> Signed-off-by: David Hunt <david.hunt@intel.com>
> ---
>  app/test/test_mempool_perf.c     | 1 -
>  lib/librte_mempool/rte_mempool.h | 1 -
>  2 files changed, 2 deletions(-)
> 
> diff --git a/app/test/test_mempool_perf.c b/app/test/test_mempool_perf.c
> index cdc02a0..091c1df 100644
> --- a/app/test/test_mempool_perf.c
> +++ b/app/test/test_mempool_perf.c
> @@ -161,7 +161,6 @@ per_lcore_mempool_test(__attribute__((unused)) void *arg)
>  							   n_get_bulk);
>  				if (unlikely(ret < 0)) {
>  					rte_mempool_dump(stdout, mp);
> -					rte_ring_dump(stdout, mp->ring);
>  					/* in this case, objects are lost... */
>  					return -1;
>  				}
> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> index a6b28b0..c33eeb8 100644
> --- a/lib/librte_mempool/rte_mempool.h
> +++ b/lib/librte_mempool/rte_mempool.h
> @@ -204,7 +204,6 @@ struct rte_mempool_memhdr {
>   */
>  struct rte_mempool {
>  	char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool. */
> -	struct rte_ring *ring;           /**< Ring to store objects. */
>  	union {
>  		void *pool_data;         /**< Ring or pool to store objects */
>  		uint64_t pool_id;        /**< External mempool identifier */
> 

Sorry if I missed it in previous discussions, but I don't really
see the point of having this in a separate commit, as the goal
of the previous commit is to replace the ring by configurable ops.

Moreover, after applying only the previous commit, the
call to rte_ring_dump(stdout, mp->ring) would probably crash
since ring is NULL.

I think this comment also applies to the next commit. Splitting
between functionalities is good, but in this case I think the 3
commits are linked together, and it should not break compilation
or tests to facilitate the git bisect.


Regards,
Olivier

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v7 5/5] mbuf: allow apps to change default mempool ops
  2016-06-02 13:27             ` [PATCH v7 5/5] mbuf: allow apps to change default mempool ops David Hunt
@ 2016-06-03 12:28               ` Olivier MATZ
  2016-06-03 14:06                 ` Hunt, David
  0 siblings, 1 reply; 238+ messages in thread
From: Olivier MATZ @ 2016-06-03 12:28 UTC (permalink / raw)
  To: David Hunt, dev; +Cc: viktorin, jerin.jacob

> [PATCH v7 5/5] mbuf: allow apps to change default mempool ops

Should the title be fixed?
I don't feel this allows the application to change the default ops.

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v7 5/5] mbuf: allow apps to change default mempool ops
  2016-06-03 12:28               ` Olivier MATZ
@ 2016-06-03 14:06                 ` Hunt, David
  2016-06-03 14:10                   ` Olivier Matz
  0 siblings, 1 reply; 238+ messages in thread
From: Hunt, David @ 2016-06-03 14:06 UTC (permalink / raw)
  To: Olivier MATZ, dev; +Cc: viktorin, jerin.jacob



On 6/3/2016 1:28 PM, Olivier MATZ wrote:
>> [PATCH v7 5/5] mbuf: allow apps to change default mempool ops
> Should the title be fixed?
> I don't feel this allows application to change the default ops.

Allow _user_ to change default mempool ops, I think.

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v7 5/5] mbuf: allow apps to change default mempool ops
  2016-06-03 14:06                 ` Hunt, David
@ 2016-06-03 14:10                   ` Olivier Matz
  2016-06-03 14:14                     ` Hunt, David
  0 siblings, 1 reply; 238+ messages in thread
From: Olivier Matz @ 2016-06-03 14:10 UTC (permalink / raw)
  To: Hunt, David, dev; +Cc: viktorin, jerin.jacob



On 06/03/2016 04:06 PM, Hunt, David wrote:
> 
> 
> On 6/3/2016 1:28 PM, Olivier MATZ wrote:
>>> [PATCH v7 5/5] mbuf: allow apps to change default mempool ops
>> Should the title be fixed?
>> I don't feel this allows application to change the default ops.
> 
> Allow _user_ to change default mempool ops, I think.

make default mempool ops configurable at build
 ?

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v7 5/5] mbuf: allow apps to change default mempool ops
  2016-06-03 14:10                   ` Olivier Matz
@ 2016-06-03 14:14                     ` Hunt, David
  0 siblings, 0 replies; 238+ messages in thread
From: Hunt, David @ 2016-06-03 14:14 UTC (permalink / raw)
  To: Olivier Matz, dev; +Cc: viktorin, jerin.jacob



On 6/3/2016 3:10 PM, Olivier Matz wrote:
>
> On 06/03/2016 04:06 PM, Hunt, David wrote:
>>
>> On 6/3/2016 1:28 PM, Olivier MATZ wrote:
>>>> [PATCH v7 5/5] mbuf: allow apps to change default mempool ops
>>> Should the title be fixed?
>>> I don't feel this allows application to change the default ops.
>> Allow _user_ to change default mempool ops, I think.
> make default mempool ops configurable at build
>   ?

Yup :)

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v7 2/5] mempool: remove rte_ring from rte_mempool struct
  2016-06-03 12:28               ` Olivier MATZ
@ 2016-06-03 14:17                 ` Hunt, David
  0 siblings, 0 replies; 238+ messages in thread
From: Hunt, David @ 2016-06-03 14:17 UTC (permalink / raw)
  To: Olivier MATZ, dev; +Cc: viktorin, jerin.jacob



On 6/3/2016 1:28 PM, Olivier MATZ wrote:
>
> On 06/02/2016 03:27 PM, David Hunt wrote:
>> Now that we're moving to an external mempool handler, which
>> uses a void *pool_data as a pointer to the pool data, remove the
>> unneeded ring pointer from the mempool struct.
>>
>> Signed-off-by: David Hunt <david.hunt@intel.com>
>> ---
>>   app/test/test_mempool_perf.c     | 1 -
>>   lib/librte_mempool/rte_mempool.h | 1 -
>>   2 files changed, 2 deletions(-)
>>
>> diff --git a/app/test/test_mempool_perf.c b/app/test/test_mempool_perf.c
>> index cdc02a0..091c1df 100644
>> --- a/app/test/test_mempool_perf.c
>> +++ b/app/test/test_mempool_perf.c
>> @@ -161,7 +161,6 @@ per_lcore_mempool_test(__attribute__((unused)) void *arg)
>>   							   n_get_bulk);
>>   				if (unlikely(ret < 0)) {
>>   					rte_mempool_dump(stdout, mp);
>> -					rte_ring_dump(stdout, mp->ring);
>>   					/* in this case, objects are lost... */
>>   					return -1;
>>   				}
>> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
>> index a6b28b0..c33eeb8 100644
>> --- a/lib/librte_mempool/rte_mempool.h
>> +++ b/lib/librte_mempool/rte_mempool.h
>> @@ -204,7 +204,6 @@ struct rte_mempool_memhdr {
>>    */
>>   struct rte_mempool {
>>   	char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool. */
>> -	struct rte_ring *ring;           /**< Ring to store objects. */
>>   	union {
>>   		void *pool_data;         /**< Ring or pool to store objects */
>>   		uint64_t pool_id;        /**< External mempool identifier */
>>
> Sorry if I missed it in previous discussions, but I don't really
> see the point of having this in a separate commit, as the goal
> of the previous commit is to replace the ring by configurable ops.
>
> Moreover, after applying only the previous commit, the
> call to rte_ring_dump(stdout, mp->ring) would probably crash
> since ring is NULL.
>
> I think this comment also applies to the next commit. Splitting
> between functionalities is good, but in this case I think the 3
> commits are linked together, and it should not break compilation
> or tests to facilitate the git bisect.
>
>
> Regards,
> Olivier

Yes. Originally there was a lot of discussion about splitting out the
bigger patch, which I did, and it was easier to review. But now that
we're (very) close to the final revision, I can merge those three back
into one.

Thanks,
Dave.

^ permalink raw reply	[flat|nested] 238+ messages in thread

* [PATCH v8 0/5] mempool: add external mempool manager
  2016-06-02 13:27           ` [PATCH v7 0/5] mempool: add external mempool manager David Hunt
                               ` (4 preceding siblings ...)
  2016-06-02 13:27             ` [PATCH v7 5/5] mbuf: allow apps to change default mempool ops David Hunt
@ 2016-06-03 14:58             ` David Hunt
  2016-06-03 14:58               ` [PATCH v8 1/3] mempool: support external mempool operations David Hunt
                                 ` (3 more replies)
  5 siblings, 4 replies; 238+ messages in thread
From: David Hunt @ 2016-06-03 14:58 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, viktorin, jerin.jacob

Here's the latest version of the External Mempool Manager patchset.
It is rebased on top of the latest head as of 19/5/2016, including
Olivier's 35-part patch series on mempool re-org [1]

[1] http://dpdk.org/ml/archives/dev/2016-May/039229.html

v8 changes:

 * merged first three patches in the series into one.
 * changed parameters to ops callbacks to all be an rte_mempool pointer
   rather than a pointer to opaque data or a uint64_t.
 * comment fixes.
 * fixed parameter to _free function (was inconsistent).
 * changed MEMPOOL_F_RING_CREATED to MEMPOOL_F_POOL_CREATED

v7 changes:

 * Changed rte_mempool_handler_table to rte_mempool_ops_table
 * Changed hander_idx to ops_index in rte_mempool struct
 * Reworked comments in rte_mempool.h around ops functions
 * Changed rte_mempool_hander.c to rte_mempool_ops.c
 * Changed all functions containing _handler_ to _ops_
 * Now there is no mention of 'handler' left
 * Other small changes out of review of mailing list

v6 changes:

 * Moved the flags handling from rte_mempool_create_empty to
   rte_mempool_create, as it's only there for backward compatibility
 * Various comment additions and cleanup
 * Renamed rte_mempool_handler to rte_mempool_ops
 * Added a union for *pool and u64 pool_id in struct rte_mempool
 * split the original patch into a few parts for easier review.
 * rename functions with _ext_ to _ops_.
 * addressed review comments
 * renamed put and get functions to enqueue and dequeue
 * changed occurrences of rte_mempool_ops to const, as they
   contain function pointers (security)
 * split out the default external mempool handler into a separate
   patch for easier review

v5 changes:
 * rebasing, as it is dependent on another patch series [1]

v4 changes (Olivier Matz):
 * remove the rte_mempool_create_ext() function. To change the handler, the
   user has to do the following:
   - mp = rte_mempool_create_empty()
   - rte_mempool_set_handler(mp, "my_handler")
   - rte_mempool_populate_default(mp)
   This avoids adding another function with more than 10 arguments and
   duplicating the doxygen comments
 * change the api of rte_mempool_alloc_t: only the mempool pointer is required
   as all information is available in it
 * change the api of rte_mempool_free_t: remove return value
 * move inline wrapper functions from the .c to the .h (else they won't be
   inlined). This implies having one header file (rte_mempool.h); otherwise
   it would have generated cross-dependency issues.
 * remove now unused MEMPOOL_F_INT_HANDLER (note: it was misused anyway due
   to the use of && instead of &)
 * fix build in debug mode (__MEMPOOL_STAT_ADD(mp, put_pool, n) remaining)
 * fix build with shared libraries (global handler has to be declared in
   the .map file)
 * rationalize #include order
 * remove unused function rte_mempool_get_handler_name()
 * rename some structures, fields, functions
 * remove the static in front of rte_tailq_elem rte_mempool_tailq (comment
   from Yuanhan)
 * test the ext mempool handler in the same file than standard mempool tests,
   avoiding to duplicate the code
 * rework the custom handler in mempool_test
 * rework a bit the patch selecting default mbuf pool handler
 * fix some doxygen comments

v3 changes:
 * simplified the file layout, renamed to rte_mempool_handler.[hc]
 * moved the default handlers into rte_mempool_default.c
 * moved the example handler out into app/test/test_ext_mempool.c
 * removed is_mc/is_mp change, slight perf degradation on sp cached operation
 * removed stack handler, may re-introduce at a later date
 * Changes out of code reviews

v2 changes:
 * There was a lot of duplicate code between rte_mempool_xmem_create and
   rte_mempool_create_ext. This has now been refactored and is now
   hopefully cleaner.
 * The RTE_NEXT_ABI define is now used to allow building of the library
   in a format that is compatible with binaries built against previous
   versions of DPDK.
 * Changes out of code reviews. Hopefully I've got most of them included.

The External Mempool Manager is an extension to the mempool API that allows
users to add and use an external mempool manager, which allows external memory
subsystems such as external hardware memory management systems and software
based memory allocators to be used with DPDK.

The existing API to the internal DPDK mempool manager will remain unchanged
and will be backward compatible. However, there will be an ABI breakage, as
the mempool struct is changing. These changes are all contained within
RTE_NEXT_ABI defs, and the current or next code can be selected with
the CONFIG_RTE_NEXT_ABI config setting.

There are two aspects to external mempool manager.
  1. Adding the code for your new mempool operations (ops). This is
     achieved by adding a new mempool ops source file into the
     librte_mempool library, and using the REGISTER_MEMPOOL_OPS macro.
  2. Using the new API to call rte_mempool_create_empty and
     rte_mempool_set_ops to create a new mempool
     using the name parameter to identify which ops to use.

New API calls added
 1. A new rte_mempool_create_empty() function
 2. rte_mempool_set_ops_byname() which sets the mempool's ops (functions)
 3. The rte_mempool_populate_default() and rte_mempool_populate_anon()
    functions, which populate the mempool using the relevant ops

Several external mempool managers may be used in the same application. A new
mempool can then be created by using the new 'create' function, providing the
mempool ops struct name to point the mempool to the relevant mempool manager
callback structure.

The old 'create' function can still be called by legacy programs, and will
internally work out the mempool handle based on the flags provided (single
producer, single consumer, etc). By default handles are created internally to
implement the built-in DPDK mempool manager and mempool types.

The external mempool manager needs to provide the following functions.
 1. alloc     - allocates the mempool memory, and adds each object onto a ring
 2. put       - puts an object back into the mempool once an application has
                finished with it
 3. get       - gets an object from the mempool for use by the application
 4. get_count - gets the number of available objects in the mempool
 5. free      - frees the mempool memory

Every time a get/put/get_count is called from the application/PMD, the
callback for that mempool is called. These functions are in the fastpath,
and any unoptimised ops may limit performance.
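
As a rough illustration of those five callbacks, here is a minimal
malloc-backed handler sketch, similar in spirit to the "custom_handler"
in app/test/test_mempool.c mentioned below. The struct rte_mempool here
is a stand-in with only the fields the sketch touches; the real struct
and callback signatures come from rte_mempool.h.

```c
#include <errno.h>
#include <stdlib.h>

/* stand-in with only the fields this sketch touches */
struct rte_mempool {
	void *pool_data;   /* set by alloc, used by the other callbacks */
	unsigned size;     /* max number of objects in the pool */
};

/* illustrative pool state: a LIFO stack of free object pointers */
struct custom_pool {
	void **objs;
	unsigned top;
	unsigned max;
};

static int
custom_alloc(struct rte_mempool *mp)
{
	struct custom_pool *p = calloc(1, sizeof(*p));

	if (p == NULL)
		return -ENOMEM;
	p->objs = calloc(mp->size, sizeof(void *));
	if (p->objs == NULL) {
		free(p);
		return -ENOMEM;
	}
	p->max = mp->size;
	mp->pool_data = p;
	return 0;
}

static int
custom_put(struct rte_mempool *mp, void * const *obj_table, unsigned n)
{
	struct custom_pool *p = mp->pool_data;
	unsigned i;

	if (p->top + n > p->max)
		return -ENOBUFS;
	for (i = 0; i < n; i++)
		p->objs[p->top++] = obj_table[i];
	return 0;
}

static int
custom_get(struct rte_mempool *mp, void **obj_table, unsigned n)
{
	struct custom_pool *p = mp->pool_data;
	unsigned i;

	if (p->top < n)
		return -ENOENT;
	for (i = 0; i < n; i++)
		obj_table[i] = p->objs[--p->top];
	return 0;
}

static unsigned
custom_get_count(const struct rte_mempool *mp)
{
	const struct custom_pool *p = mp->pool_data;

	return p->top;
}

static void
custom_free(struct rte_mempool *mp)
{
	struct custom_pool *p = mp->pool_data;

	free(p->objs);
	free(p);
}
```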

The new APIs are as follows:

1. rte_mempool_create_empty

struct rte_mempool *
rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
    unsigned cache_size, unsigned private_data_size,
    int socket_id, unsigned flags);

2. rte_mempool_set_ops_byname()

int
rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name);

3. rte_mempool_populate_default()

int rte_mempool_populate_default(struct rte_mempool *mp);

4. rte_mempool_populate_anon()

int rte_mempool_populate_anon(struct rte_mempool *mp);

Please see rte_mempool.h for further information on the parameters.
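
The intended call sequence can be sketched as below. The three
rte_mempool_* functions here are deliberately stubbed stand-ins so the
fragment compiles standalone; the real implementations (and the full
argument semantics) live in rte_mempool.h.

```c
#include <stdio.h>
#include <stdlib.h>

/* stand-ins so the sequence compiles without DPDK */
struct rte_mempool { char ops_name[32]; int populated; };

static struct rte_mempool *
rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
	unsigned cache_size, unsigned priv_size, int socket_id, unsigned flags)
{
	(void)name; (void)n; (void)elt_size; (void)cache_size;
	(void)priv_size; (void)socket_id; (void)flags;
	return calloc(1, sizeof(struct rte_mempool));
}

static int
rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name)
{
	snprintf(mp->ops_name, sizeof(mp->ops_name), "%s", name);
	return 0;
}

static int
rte_mempool_populate_default(struct rte_mempool *mp)
{
	mp->populated = 1;
	return 0;
}

/* the sequence from this cover letter: create empty, select the ops
 * by name, then populate the mempool using those ops */
static struct rte_mempool *
make_pool_with_ops(const char *ops_name)
{
	struct rte_mempool *mp =
		rte_mempool_create_empty("my_pool", 4096, 2048, 256, 0, -1, 0);

	if (mp == NULL)
		return NULL;
	if (rte_mempool_set_ops_byname(mp, ops_name) < 0 ||
			rte_mempool_populate_default(mp) < 0) {
		free(mp);
		return NULL;
	}
	return mp;
}
```

Note that the ops must be set between create_empty and populate, since
populate is what first exercises the selected alloc callback.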


The important thing to note is that the mempool ops struct is passed by name
to rte_mempool_set_ops_byname, which looks through the ops struct array to
get the ops_index, which is then stored in the rte_mempool structure. This
allows multiple processes to use the same mempool, as the function pointers
are accessed via the ops index.

The mempool ops structure contains callbacks to the implementation of
the ops function, and is set up for registration as follows:

static const struct rte_mempool_ops ops_sp_mc = {
    .name = "ring_sp_mc",
    .alloc = rte_mempool_common_ring_alloc,
    .put = common_ring_sp_put,
    .get = common_ring_mc_get,
    .get_count = common_ring_get_count,
    .free = common_ring_free,
};

And then the following macro will register the ops in the array of ops
structures

REGISTER_MEMPOOL_OPS(ops_mp_mc);

For an example of API usage, please see app/test/test_mempool.c, which
implements a rudimentary "custom_handler" mempool manager using simple mallocs
for each mempool object. This file also contains the callbacks and self
registration for the new handler.

David Hunt (2):
  mempool: support external mempool operations
  mbuf: make default mempool ops configurable at build

Olivier Matz (1):
  app/test: test external mempool manager

^ permalink raw reply	[flat|nested] 238+ messages in thread

* [PATCH v8 1/3] mempool: support external mempool operations
  2016-06-03 14:58             ` [PATCH v8 0/5] mempool: add external mempool manager David Hunt
@ 2016-06-03 14:58               ` David Hunt
  2016-06-06 14:32                 ` Shreyansh Jain
                                   ` (4 more replies)
  2016-06-03 14:58               ` [PATCH v8 2/3] app/test: test external mempool manager David Hunt
                                 ` (2 subsequent siblings)
  3 siblings, 5 replies; 238+ messages in thread
From: David Hunt @ 2016-06-03 14:58 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, viktorin, jerin.jacob, David Hunt

Until now, the objects stored in a mempool were internally stored in a
ring. This patch introduces the possibility to register external handlers
replacing the ring.

The default behavior remains unchanged, but calling the new function
rte_mempool_set_handler() right after rte_mempool_create_empty() allows
the user to change the handler that will be used when populating
the mempool.

This patch also adds a set of default ops (function callbacks) based
on rte_ring.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
---
 app/test/test_mempool_perf.c             |   1 -
 lib/librte_mempool/Makefile              |   2 +
 lib/librte_mempool/rte_mempool.c         |  73 ++++-----
 lib/librte_mempool/rte_mempool.h         | 247 ++++++++++++++++++++++++++++---
 lib/librte_mempool/rte_mempool_default.c | 157 ++++++++++++++++++++
 lib/librte_mempool/rte_mempool_ops.c     | 149 +++++++++++++++++++
 6 files changed, 562 insertions(+), 67 deletions(-)
 create mode 100644 lib/librte_mempool/rte_mempool_default.c
 create mode 100644 lib/librte_mempool/rte_mempool_ops.c

diff --git a/app/test/test_mempool_perf.c b/app/test/test_mempool_perf.c
index cdc02a0..091c1df 100644
--- a/app/test/test_mempool_perf.c
+++ b/app/test/test_mempool_perf.c
@@ -161,7 +161,6 @@ per_lcore_mempool_test(__attribute__((unused)) void *arg)
 							   n_get_bulk);
 				if (unlikely(ret < 0)) {
 					rte_mempool_dump(stdout, mp);
-					rte_ring_dump(stdout, mp->ring);
 					/* in this case, objects are lost... */
 					return -1;
 				}
diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
index 43423e0..8cac29b 100644
--- a/lib/librte_mempool/Makefile
+++ b/lib/librte_mempool/Makefile
@@ -42,6 +42,8 @@ LIBABIVER := 2
 
 # all source are stored in SRCS-y
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool.c
+SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_ops.c
+SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_default.c
 # install includes
 SYMLINK-$(CONFIG_RTE_LIBRTE_MEMPOOL)-include := rte_mempool.h
 
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index b54de43..eb74e25 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -148,7 +148,7 @@ mempool_add_elem(struct rte_mempool *mp, void *obj, phys_addr_t physaddr)
 #endif
 
 	/* enqueue in ring */
-	rte_ring_sp_enqueue(mp->ring, obj);
+	rte_mempool_ops_enqueue_bulk(mp, &obj, 1);
 }
 
 /* call obj_cb() for each mempool element */
@@ -303,40 +303,6 @@ rte_mempool_xmem_usage(__rte_unused void *vaddr, uint32_t elt_num,
 	return (size_t)paddr_idx << pg_shift;
 }
 
-/* create the internal ring */
-static int
-rte_mempool_ring_create(struct rte_mempool *mp)
-{
-	int rg_flags = 0, ret;
-	char rg_name[RTE_RING_NAMESIZE];
-	struct rte_ring *r;
-
-	ret = snprintf(rg_name, sizeof(rg_name),
-		RTE_MEMPOOL_MZ_FORMAT, mp->name);
-	if (ret < 0 || ret >= (int)sizeof(rg_name))
-		return -ENAMETOOLONG;
-
-	/* ring flags */
-	if (mp->flags & MEMPOOL_F_SP_PUT)
-		rg_flags |= RING_F_SP_ENQ;
-	if (mp->flags & MEMPOOL_F_SC_GET)
-		rg_flags |= RING_F_SC_DEQ;
-
-	/* Allocate the ring that will be used to store objects.
-	 * Ring functions will return appropriate errors if we are
-	 * running as a secondary process etc., so no checks made
-	 * in this function for that condition.
-	 */
-	r = rte_ring_create(rg_name, rte_align32pow2(mp->size + 1),
-		mp->socket_id, rg_flags);
-	if (r == NULL)
-		return -rte_errno;
-
-	mp->ring = r;
-	mp->flags |= MEMPOOL_F_RING_CREATED;
-	return 0;
-}
-
 /* free a memchunk allocated with rte_memzone_reserve() */
 static void
 rte_mempool_memchunk_mz_free(__rte_unused struct rte_mempool_memhdr *memhdr,
@@ -354,7 +320,7 @@ rte_mempool_free_memchunks(struct rte_mempool *mp)
 	void *elt;
 
 	while (!STAILQ_EMPTY(&mp->elt_list)) {
-		rte_ring_sc_dequeue(mp->ring, &elt);
+		rte_mempool_ops_dequeue_bulk(mp, &elt, 1);
 		(void)elt;
 		STAILQ_REMOVE_HEAD(&mp->elt_list, next);
 		mp->populated_size--;
@@ -383,13 +349,16 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
 	unsigned i = 0;
 	size_t off;
 	struct rte_mempool_memhdr *memhdr;
-	int ret;
 
 	/* create the internal ring if not already done */
-	if ((mp->flags & MEMPOOL_F_RING_CREATED) == 0) {
-		ret = rte_mempool_ring_create(mp);
-		if (ret < 0)
-			return ret;
+	if ((mp->flags & MEMPOOL_F_POOL_CREATED) == 0) {
+		rte_errno = 0;
+		mp->pool_data = rte_mempool_ops_alloc(mp);
+		if (mp->pool_data == NULL) {
+			if (rte_errno == 0)
+				return -EINVAL;
+			return -rte_errno;
+		}
+		mp->flags |= MEMPOOL_F_POOL_CREATED;
 	}
 
 	/* mempool is already populated */
@@ -703,7 +672,7 @@ rte_mempool_free(struct rte_mempool *mp)
 	rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
 
 	rte_mempool_free_memchunks(mp);
-	rte_ring_free(mp->ring);
+	rte_mempool_ops_free(mp);
 	rte_memzone_free(mp->mz);
 }
 
@@ -815,6 +784,20 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
 		RTE_PTR_ADD(mp, MEMPOOL_HEADER_SIZE(mp, 0));
 
 	te->data = mp;
+
+	/*
+	 * Since we have 4 combinations of the SP/SC/MP/MC, examine the flags
+	 * to set the correct index into the table of ops structs. Note that
+	 * "ring_sp_sc" requires both flags to be set.
+	 */
+	if ((flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET)) ==
+			(MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
+		rte_mempool_set_ops_byname(mp, "ring_sp_sc");
+	else if (flags & MEMPOOL_F_SP_PUT)
+		rte_mempool_set_ops_byname(mp, "ring_sp_mc");
+	else if (flags & MEMPOOL_F_SC_GET)
+		rte_mempool_set_ops_byname(mp, "ring_mp_sc");
+	else
+		rte_mempool_set_ops_byname(mp, "ring_mp_mc");
+
 	rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
 	TAILQ_INSERT_TAIL(mempool_list, te, next);
 	rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
@@ -930,7 +913,7 @@ rte_mempool_count(const struct rte_mempool *mp)
 	unsigned count;
 	unsigned lcore_id;
 
-	count = rte_ring_count(mp->ring);
+	count = rte_mempool_ops_get_count(mp);
 
 	if (mp->cache_size == 0)
 		return count;
@@ -1123,7 +1106,7 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 
 	fprintf(f, "mempool <%s>@%p\n", mp->name, mp);
 	fprintf(f, "  flags=%x\n", mp->flags);
-	fprintf(f, "  ring=<%s>@%p\n", mp->ring->name, mp->ring);
+	fprintf(f, "  pool=%p\n", mp->pool_data);
 	fprintf(f, "  phys_addr=0x%" PRIx64 "\n", mp->mz->phys_addr);
 	fprintf(f, "  nb_mem_chunks=%u\n", mp->nb_mem_chunks);
 	fprintf(f, "  size=%"PRIu32"\n", mp->size);
@@ -1144,7 +1127,7 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 	}
 
 	cache_count = rte_mempool_dump_cache(f, mp);
-	common_count = rte_ring_count(mp->ring);
+	common_count = rte_mempool_ops_get_count(mp);
 	if ((cache_count + common_count) > mp->size)
 		common_count = mp->size - cache_count;
 	fprintf(f, "  common_pool_count=%u\n", common_count);
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 60339bd..afc63f2 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -67,6 +67,7 @@
 #include <inttypes.h>
 #include <sys/queue.h>
 
+#include <rte_spinlock.h>
 #include <rte_log.h>
 #include <rte_debug.h>
 #include <rte_lcore.h>
@@ -203,10 +204,13 @@ struct rte_mempool_memhdr {
  */
 struct rte_mempool {
 	char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool. */
-	struct rte_ring *ring;           /**< Ring to store objects. */
+	union {
+		void *pool_data;         /**< Ring or pool to store objects */
+		uint64_t pool_id;        /**< External mempool identifier */
+	};
 	const struct rte_memzone *mz;    /**< Memzone where pool is allocated */
 	int flags;                       /**< Flags of the mempool. */
-	int socket_id;                   /**< Socket id passed at mempool creation. */
+	int socket_id;                   /**< Socket id passed at create */
 	uint32_t size;                   /**< Max size of the mempool. */
 	uint32_t cache_size;             /**< Size of per-lcore local cache. */
 	uint32_t cache_flushthresh;
@@ -217,6 +221,14 @@ struct rte_mempool {
 	uint32_t trailer_size;           /**< Size of trailer (after elt). */
 
 	unsigned private_data_size;      /**< Size of private data. */
+	/**
+	 * Index into rte_mempool_ops_table array of mempool ops
+	 * structs, which contain callback function pointers.
+	 * We're using an index here rather than pointers to the callbacks
+	 * to facilitate any secondary processes that may want to use
+	 * this mempool.
+	 */
+	int32_t ops_index;
 
 	struct rte_mempool_cache *local_cache; /**< Per-lcore local cache */
 
@@ -235,7 +247,7 @@ struct rte_mempool {
 #define MEMPOOL_F_NO_CACHE_ALIGN 0x0002 /**< Do not align objs on cache lines.*/
 #define MEMPOOL_F_SP_PUT         0x0004 /**< Default put is "single-producer".*/
 #define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
-#define MEMPOOL_F_RING_CREATED   0x0010 /**< Internal: ring is created */
+#define MEMPOOL_F_POOL_CREATED   0x0010 /**< Internal: pool is created */
 #define MEMPOOL_F_NO_PHYS_CONTIG 0x0020 /**< Don't need physically contiguous objs. */
 
 /**
@@ -325,6 +337,210 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp,
 #define __mempool_check_cookies(mp, obj_table_const, n, free) do {} while(0)
 #endif /* RTE_LIBRTE_MEMPOOL_DEBUG */
 
+#define RTE_MEMPOOL_OPS_NAMESIZE 32 /**< Max length of ops struct name. */
+
+/**
+ * Prototype for an implementation-specific data provisioning function.
+ *
+ * The function should provide the implementation-specific memory for
+ * use by the other mempool ops functions in a given mempool ops struct.
+ * E.g. the default ops provides an instance of rte_ring for this purpose.
+ * Other handlers will most likely point to a different type of data
+ * structure, which remains transparent to the application programmer.
+ */
+typedef void *(*rte_mempool_alloc_t)(struct rte_mempool *mp);
+
+/**
+ * Free the opaque private data pointed to by mp->pool_data pointer.
+ */
+typedef void (*rte_mempool_free_t)(struct rte_mempool *mp);
+
+/**
+ * Enqueue a table of n objects into the external pool.
+ */
+typedef int (*rte_mempool_put_t)(struct rte_mempool *mp,
+		void * const *obj_table, unsigned int n);
+
+/**
+ * Dequeue n objects from the external pool into the object table.
+ */
+typedef int (*rte_mempool_get_t)(struct rte_mempool *mp,
+		void **obj_table, unsigned int n);
+
+/**
+ * Return the number of available objects in the external pool.
+ */
+typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
+
+/** Structure defining mempool operations */
+struct rte_mempool_ops {
+	char name[RTE_MEMPOOL_OPS_NAMESIZE]; /**< Name of mempool ops struct */
+	rte_mempool_alloc_t alloc;       /**< Allocate private data */
+	rte_mempool_free_t free;         /**< Free the external pool. */
+	rte_mempool_put_t put;           /**< Put an object. */
+	rte_mempool_get_t get;           /**< Get an object. */
+	rte_mempool_get_count get_count; /**< Get the number of available objs. */
+} __rte_cache_aligned;
+
+#define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */
+
+/**
+ * Structure storing the table of registered ops structs, each of which contain
+ * the function pointers for the mempool ops functions.
+ * Each process has its own storage for this ops struct array so that
+ * the mempools can be shared across primary and secondary processes.
+ * The indices used to access the array are valid across processes, whereas
+ * any function pointers stored directly in the mempool struct would not be.
+ * This results in us simply having "ops_index" in the mempool struct.
+ */
+struct rte_mempool_ops_table {
+	rte_spinlock_t sl;     /**< Spinlock for add/delete. */
+	uint32_t num_ops;      /**< Number of used ops structs in the table. */
+	/**
+	 * Storage for all possible ops structs.
+	 */
+	struct rte_mempool_ops ops[RTE_MEMPOOL_MAX_OPS_IDX];
+} __rte_cache_aligned;
+
+/** Array of registered ops structs */
+extern struct rte_mempool_ops_table rte_mempool_ops_table;
+
+/**
+ * @internal Get the mempool ops struct from its index.
+ *
+ * @param ops_index
+ *   The index of the ops struct in the ops struct table. It must be a valid
+ *   index: (0 <= idx < num_ops).
+ * @return
+ *   The pointer to the ops struct in the table.
+ */
+static inline struct rte_mempool_ops *
+rte_mempool_ops_get(int ops_index)
+{
+	RTE_VERIFY(ops_index < RTE_MEMPOOL_MAX_OPS_IDX);
+
+	return &rte_mempool_ops_table.ops[ops_index];
+}
+
+/**
+ * @internal Wrapper for mempool_ops alloc callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @return
+ *   The opaque pointer to the external pool.
+ */
+void *
+rte_mempool_ops_alloc(struct rte_mempool *mp);
+
+/**
+ * @internal Wrapper for mempool_ops get callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param obj_table
+ *   Pointer to a table of void * pointers (objects).
+ * @param n
+ *   Number of objects to get.
+ * @return
+ *   - 0: Success; got n objects.
+ *   - <0: Error; code of get function.
+ */
+static inline int
+rte_mempool_ops_dequeue_bulk(struct rte_mempool *mp,
+		void **obj_table, unsigned n)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_ops_get(mp->ops_index);
+	return ops->get(mp, obj_table, n);
+}
+
+/**
+ * @internal wrapper for mempool_ops put callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param obj_table
+ *   Pointer to a table of void * pointers (objects).
+ * @param n
+ *   Number of objects to put.
+ * @return
+ *   - 0: Success; n objects supplied.
+ *   - <0: Error; code of put function.
+ */
+static inline int
+rte_mempool_ops_enqueue_bulk(struct rte_mempool *mp, void * const *obj_table,
+		unsigned n)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_ops_get(mp->ops_index);
+	return ops->put(mp, obj_table, n);
+}
+
+/**
+ * @internal wrapper for mempool_ops get_count callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @return
+ *   The number of available objects in the external pool.
+ */
+unsigned
+rte_mempool_ops_get_count(const struct rte_mempool *mp);
+
+/**
+ * @internal wrapper for mempool_ops free callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ */
+void
+rte_mempool_ops_free(struct rte_mempool *mp);
+
+/**
+ * Set the ops of a mempool
+ *
+ * This can only be done on a mempool that is not populated, i.e. just after
+ * a call to rte_mempool_create_empty().
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param name
+ *   Name of the ops structure to use for this mempool.
+ * @return
+ *   - 0: Success; the mempool now uses the requested ops functions.
+ *   - -EINVAL: Invalid ops struct name provided.
+ *   - -EEXIST: mempool already has an ops struct assigned.
+ */
+int
+rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name);
+
+/**
+ * Register mempool operations
+ *
+ * @param ops
+ *   Pointer to an ops structure to register.
+ * @return
+ *   - >=0: Success; the index of the ops struct in the table.
+ *   - -EINVAL: some callbacks are missing from the ops struct.
+ *   - -ENOSPC: the maximum number of ops structs has been reached.
+ */
+int rte_mempool_ops_register(const struct rte_mempool_ops *ops);
+
+/**
+ * Macro to statically register the ops of an external mempool manager.
+ * Note that rte_mempool_ops_register() fails silently here when more
+ * than RTE_MEMPOOL_MAX_OPS_IDX ops structs have been registered.
+ */
+#define MEMPOOL_REGISTER_OPS(ops)					\
+	void mp_hdlr_init_##ops(void);					\
+	void __attribute__((constructor, used)) mp_hdlr_init_##ops(void)\
+	{								\
+		rte_mempool_ops_register(&ops);			\
+	}
+
 /**
  * An object callback function for mempool.
  *
@@ -774,7 +990,7 @@ __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
 	cache->len += n;
 
 	if (cache->len >= flushthresh) {
-		rte_ring_mp_enqueue_bulk(mp->ring, &cache->objs[cache_size],
+		rte_mempool_ops_enqueue_bulk(mp, &cache->objs[cache_size],
 				cache->len - cache_size);
 		cache->len = cache_size;
 	}
@@ -785,19 +1001,10 @@ ring_enqueue:
 
 	/* push remaining objects in ring */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-	if (is_mp) {
-		if (rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n) < 0)
-			rte_panic("cannot put objects in mempool\n");
-	}
-	else {
-		if (rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n) < 0)
-			rte_panic("cannot put objects in mempool\n");
-	}
+	if (rte_mempool_ops_enqueue_bulk(mp, obj_table, n) < 0)
+		rte_panic("cannot put objects in mempool\n");
 #else
-	if (is_mp)
-		rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n);
-	else
-		rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n);
+	rte_mempool_ops_enqueue_bulk(mp, obj_table, n);
 #endif
 }
 
@@ -945,7 +1152,8 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
 		uint32_t req = n + (cache_size - cache->len);
 
 		/* How many do we require i.e. number to fill the cache + the request */
-		ret = rte_ring_mc_dequeue_bulk(mp->ring, &cache->objs[cache->len], req);
+		ret = rte_mempool_ops_dequeue_bulk(mp,
+			&cache->objs[cache->len], req);
 		if (unlikely(ret < 0)) {
 			/*
 			 * In the offchance that we are buffer constrained,
@@ -972,10 +1180,7 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
 ring_dequeue:
 
 	/* get remaining objects from ring */
-	if (is_mc)
-		ret = rte_ring_mc_dequeue_bulk(mp->ring, obj_table, n);
-	else
-		ret = rte_ring_sc_dequeue_bulk(mp->ring, obj_table, n);
+	ret = rte_mempool_ops_dequeue_bulk(mp, obj_table, n);
 
 	if (ret < 0)
 		__MEMPOOL_STAT_ADD(mp, get_fail, n);
diff --git a/lib/librte_mempool/rte_mempool_default.c b/lib/librte_mempool/rte_mempool_default.c
new file mode 100644
index 0000000..d5451c9
--- /dev/null
+++ b/lib/librte_mempool/rte_mempool_default.c
@@ -0,0 +1,157 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <string.h>
+
+#include <rte_errno.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+
+static int
+common_ring_mp_put(struct rte_mempool *mp, void * const *obj_table, unsigned n)
+{
+	return rte_ring_mp_enqueue_bulk((struct rte_ring *)(mp->pool_data),
+			obj_table, n);
+}
+
+static int
+common_ring_sp_put(struct rte_mempool *mp, void * const *obj_table, unsigned n)
+{
+	return rte_ring_sp_enqueue_bulk((struct rte_ring *)(mp->pool_data),
+		obj_table, n);
+}
+
+static int
+common_ring_mc_get(struct rte_mempool *mp, void **obj_table, unsigned n)
+{
+	return rte_ring_mc_dequeue_bulk((struct rte_ring *)(mp->pool_data),
+		obj_table, n);
+}
+
+static int
+common_ring_sc_get(struct rte_mempool *mp, void **obj_table, unsigned n)
+{
+	return rte_ring_sc_dequeue_bulk((struct rte_ring *)(mp->pool_data),
+		obj_table, n);
+}
+
+static unsigned
+common_ring_get_count(const struct rte_mempool *mp)
+{
+	return rte_ring_count((struct rte_ring *)(mp->pool_data));
+}
+
+
+static void *
+common_ring_alloc(struct rte_mempool *mp)
+{
+	int rg_flags = 0, ret;
+	char rg_name[RTE_RING_NAMESIZE];
+	struct rte_ring *r;
+
+	ret = snprintf(rg_name, sizeof(rg_name),
+		RTE_MEMPOOL_MZ_FORMAT, mp->name);
+	if (ret < 0 || ret >= (int)sizeof(rg_name)) {
+		rte_errno = ENAMETOOLONG;
+		return NULL;
+	}
+
+	/* ring flags */
+	if (mp->flags & MEMPOOL_F_SP_PUT)
+		rg_flags |= RING_F_SP_ENQ;
+	if (mp->flags & MEMPOOL_F_SC_GET)
+		rg_flags |= RING_F_SC_DEQ;
+
+	/* Allocate the ring that will be used to store objects.
+	 * Ring functions will return appropriate errors if we are
+	 * running as a secondary process etc., so no checks made
+	 * in this function for that condition. */
+	r = rte_ring_create(rg_name, rte_align32pow2(mp->size + 1),
+		mp->socket_id, rg_flags);
+
+	return r;
+}
+
+static void
+common_ring_free(struct rte_mempool *mp)
+{
+	rte_ring_free((struct rte_ring *)mp->pool_data);
+}
+
+/*
+ * The following 4 declarations of mempool ops structs address
+ * the need for the backward compatible mempool managers for
+ * single/multi producers and single/multi consumers as dictated by the
+ * flags provided to the rte_mempool_create function
+ */
+static const struct rte_mempool_ops ops_mp_mc = {
+	.name = "ring_mp_mc",
+	.alloc = common_ring_alloc,
+	.free = common_ring_free,
+	.put = common_ring_mp_put,
+	.get = common_ring_mc_get,
+	.get_count = common_ring_get_count,
+};
+
+static const struct rte_mempool_ops ops_sp_sc = {
+	.name = "ring_sp_sc",
+	.alloc = common_ring_alloc,
+	.free = common_ring_free,
+	.put = common_ring_sp_put,
+	.get = common_ring_sc_get,
+	.get_count = common_ring_get_count,
+};
+
+static const struct rte_mempool_ops ops_mp_sc = {
+	.name = "ring_mp_sc",
+	.alloc = common_ring_alloc,
+	.free = common_ring_free,
+	.put = common_ring_mp_put,
+	.get = common_ring_sc_get,
+	.get_count = common_ring_get_count,
+};
+
+static const struct rte_mempool_ops ops_sp_mc = {
+	.name = "ring_sp_mc",
+	.alloc = common_ring_alloc,
+	.free = common_ring_free,
+	.put = common_ring_sp_put,
+	.get = common_ring_mc_get,
+	.get_count = common_ring_get_count,
+};
+
+MEMPOOL_REGISTER_OPS(ops_mp_mc);
+MEMPOOL_REGISTER_OPS(ops_sp_sc);
+MEMPOOL_REGISTER_OPS(ops_mp_sc);
+MEMPOOL_REGISTER_OPS(ops_sp_mc);
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
new file mode 100644
index 0000000..2c47525
--- /dev/null
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -0,0 +1,149 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2016 6WIND S.A.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <string.h>
+
+#include <rte_mempool.h>
+#include <rte_errno.h>
+
+/* indirect jump table to support external memory pools */
+struct rte_mempool_ops_table rte_mempool_ops_table = {
+	.sl = RTE_SPINLOCK_INITIALIZER,
+	.num_ops = 0
+};
+
+/* add a new ops struct in rte_mempool_ops_table, return its index */
+int
+rte_mempool_ops_register(const struct rte_mempool_ops *h)
+{
+	struct rte_mempool_ops *ops;
+	int16_t ops_index;
+
+	rte_spinlock_lock(&rte_mempool_ops_table.sl);
+
+	if (rte_mempool_ops_table.num_ops >=
+			RTE_MEMPOOL_MAX_OPS_IDX) {
+		rte_spinlock_unlock(&rte_mempool_ops_table.sl);
+		RTE_LOG(ERR, MEMPOOL,
+			"Maximum number of mempool ops structs exceeded\n");
+		return -ENOSPC;
+	}
+
+	if (h->put == NULL || h->get == NULL || h->get_count == NULL) {
+		rte_spinlock_unlock(&rte_mempool_ops_table.sl);
+		RTE_LOG(ERR, MEMPOOL,
+			"Missing callback while registering mempool ops\n");
+		return -EINVAL;
+	}
+
+	if (strlen(h->name) >= sizeof(ops->name) - 1) {
+		rte_spinlock_unlock(&rte_mempool_ops_table.sl);
+		RTE_LOG(DEBUG, EAL, "%s(): mempool_ops <%s>: name too long\n",
+				__func__, h->name);
+		rte_errno = EEXIST;
+		return -EEXIST;
+	}
+
+	ops_index = rte_mempool_ops_table.num_ops++;
+	ops = &rte_mempool_ops_table.ops[ops_index];
+	snprintf(ops->name, sizeof(ops->name), "%s", h->name);
+	ops->alloc = h->alloc;
+	ops->free = h->free;
+	ops->put = h->put;
+	ops->get = h->get;
+	ops->get_count = h->get_count;
+
+	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
+
+	return ops_index;
+}
+
+/* wrapper to allocate an external mempool's private (pool) data */
+void *
+rte_mempool_ops_alloc(struct rte_mempool *mp)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_ops_get(mp->ops_index);
+	if (ops->alloc == NULL)
+		return NULL;
+	return ops->alloc(mp);
+}
+
+/* wrapper to free an external pool ops */
+void
+rte_mempool_ops_free(struct rte_mempool *mp)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_ops_get(mp->ops_index);
+	if (ops->free == NULL)
+		return;
+	return ops->free(mp);
+}
+
+/* wrapper to get available objects in an external mempool */
+unsigned int
+rte_mempool_ops_get_count(const struct rte_mempool *mp)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_ops_get(mp->ops_index);
+	return ops->get_count(mp);
+}
+
+/* sets mempool ops previously registered by rte_mempool_ops_register */
+int
+rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name)
+{
+	struct rte_mempool_ops *ops = NULL;
+	unsigned i;
+
+	/* too late, the mempool is already populated */
+	if (mp->flags & MEMPOOL_F_POOL_CREATED)
+		return -EEXIST;
+
+	for (i = 0; i < rte_mempool_ops_table.num_ops; i++) {
+		if (!strcmp(name,
+				rte_mempool_ops_table.ops[i].name)) {
+			ops = &rte_mempool_ops_table.ops[i];
+			break;
+		}
+	}
+
+	if (ops == NULL)
+		return -EINVAL;
+
+	mp->ops_index = i;
+	return 0;
+}
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 238+ messages in thread

* [PATCH v8 2/3] app/test: test external mempool manager
  2016-06-03 14:58             ` [PATCH v8 0/5] mempool: add external mempool manager David Hunt
  2016-06-03 14:58               ` [PATCH v8 1/3] mempool: support external mempool operations David Hunt
@ 2016-06-03 14:58               ` David Hunt
  2016-06-03 14:58               ` [PATCH v8 3/3] mbuf: make default mempool ops configurable at build David Hunt
  2016-06-10 15:16               ` [PATCH v9 0/3] mempool: add external mempool manager David Hunt
  3 siblings, 0 replies; 238+ messages in thread
From: David Hunt @ 2016-06-03 14:58 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, viktorin, jerin.jacob, David Hunt

Add a minimal set of custom mempool ops and check that a mempool using
them also passes the basic mempool autotests.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
---
 app/test/test_mempool.c | 114 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 114 insertions(+)

diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c
index b586249..8526670 100644
--- a/app/test/test_mempool.c
+++ b/app/test/test_mempool.c
@@ -83,6 +83,97 @@
 static rte_atomic32_t synchro;
 
 /*
+ * Simple example of a custom mempool structure: holds pointers to all
+ * the elements in a flat array, protected by a spinlock.
+ */
+struct custom_mempool {
+	rte_spinlock_t lock;
+	unsigned count;
+	unsigned size;
+	void *elts[];
+};
+
+/*
+ * Allocate the custom mempool structure, with room for a pointer to each
+ * element; the objects themselves are inserted when the pool is populated.
+ */
+static void *
+custom_mempool_alloc(struct rte_mempool *mp)
+{
+	struct custom_mempool *cm;
+
+	cm = rte_zmalloc("custom_mempool",
+		sizeof(struct custom_mempool) + mp->size * sizeof(void *), 0);
+	if (cm == NULL)
+		return NULL;
+
+	rte_spinlock_init(&cm->lock);
+	cm->count = 0;
+	cm->size = mp->size;
+	return cm;
+}
+
+static void
+custom_mempool_free(struct rte_mempool *mp)
+{
+	rte_free((void *)(mp->pool_data));
+}
+
+static int
+custom_mempool_put(struct rte_mempool *mp, void * const *obj_table, unsigned n)
+{
+	struct custom_mempool *cm = (struct custom_mempool *)(mp->pool_data);
+	int ret = 0;
+
+	rte_spinlock_lock(&cm->lock);
+	if (cm->count + n > cm->size) {
+		ret = -ENOBUFS;
+	} else {
+		memcpy(&cm->elts[cm->count], obj_table, sizeof(void *) * n);
+		cm->count += n;
+	}
+	rte_spinlock_unlock(&cm->lock);
+	return ret;
+}
+
+
+static int
+custom_mempool_get(struct rte_mempool *mp, void **obj_table, unsigned n)
+{
+	struct custom_mempool *cm = (struct custom_mempool *)(mp->pool_data);
+	int ret = 0;
+
+	rte_spinlock_lock(&cm->lock);
+	if (n > cm->count) {
+		ret = -ENOENT;
+	} else {
+		cm->count -= n;
+		memcpy(obj_table, &cm->elts[cm->count], sizeof(void *) * n);
+	}
+	rte_spinlock_unlock(&cm->lock);
+	return ret;
+}
+
+static unsigned
+custom_mempool_get_count(const struct rte_mempool *mp)
+{
+	struct custom_mempool *cm = (struct custom_mempool *)(mp->pool_data);
+
+	return cm->count;
+}
+
+static struct rte_mempool_ops mempool_ops_custom = {
+	.name = "custom_handler",
+	.alloc = custom_mempool_alloc,
+	.free = custom_mempool_free,
+	.put = custom_mempool_put,
+	.get = custom_mempool_get,
+	.get_count = custom_mempool_get_count,
+};
+
+MEMPOOL_REGISTER_OPS(mempool_ops_custom);
+
+/*
  * save the object number in the first 4 bytes of object data. All
  * other bytes are set to 0.
  */
@@ -477,6 +568,7 @@ test_mempool(void)
 {
 	struct rte_mempool *mp_cache = NULL;
 	struct rte_mempool *mp_nocache = NULL;
+	struct rte_mempool *mp_ext = NULL;
 
 	rte_atomic32_init(&synchro);
 
@@ -505,6 +597,27 @@ test_mempool(void)
 		goto err;
 	}
 
+	/* create a mempool with an external handler */
+	mp_ext = rte_mempool_create_empty("test_ext",
+		MEMPOOL_SIZE,
+		MEMPOOL_ELT_SIZE,
+		RTE_MEMPOOL_CACHE_MAX_SIZE, 0,
+		SOCKET_ID_ANY, 0);
+
+	if (mp_ext == NULL) {
+		printf("cannot allocate mp_ext mempool\n");
+		goto err;
+	}
+	if (rte_mempool_set_ops_byname(mp_ext, "custom_handler") < 0) {
+		printf("cannot set custom handler\n");
+		goto err;
+	}
+	if (rte_mempool_populate_default(mp_ext) < 0) {
+		printf("cannot populate mp_ext mempool\n");
+		goto err;
+	}
+	rte_mempool_obj_iter(mp_ext, my_obj_init, NULL);
+
 	/* retrieve the mempool from its name */
 	if (rte_mempool_lookup("test_nocache") != mp_nocache) {
 		printf("Cannot lookup mempool from its name\n");
@@ -545,6 +658,7 @@ test_mempool(void)
 err:
 	rte_mempool_free(mp_nocache);
 	rte_mempool_free(mp_cache);
+	rte_mempool_free(mp_ext);
 	return -1;
 }
 
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 238+ messages in thread

* [PATCH v8 3/3] mbuf: make default mempool ops configurable at build
  2016-06-03 14:58             ` [PATCH v8 0/5] mempool: add external mempool manager David Hunt
  2016-06-03 14:58               ` [PATCH v8 1/3] mempool: support external mempool operations David Hunt
  2016-06-03 14:58               ` [PATCH v8 2/3] app/test: test external mempool manager David Hunt
@ 2016-06-03 14:58               ` David Hunt
  2016-06-10 15:16               ` [PATCH v9 0/3] mempool: add external mempool manager David Hunt
  3 siblings, 0 replies; 238+ messages in thread
From: David Hunt @ 2016-06-03 14:58 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, viktorin, jerin.jacob, David Hunt

By default, the mempool ops used for mbuf allocations are a multi-producer,
multi-consumer ring. We could imagine a target (for example, some network
processors) that provides a hardware-assisted pool mechanism. In this case,
the default configuration for that architecture would contain a different
value for RTE_MBUF_DEFAULT_MEMPOOL_OPS.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
---
 config/common_base         |  1 +
 lib/librte_mbuf/rte_mbuf.c | 26 ++++++++++++++++++++++----
 2 files changed, 23 insertions(+), 4 deletions(-)

diff --git a/config/common_base b/config/common_base
index 47c26f6..899c038 100644
--- a/config/common_base
+++ b/config/common_base
@@ -394,6 +394,7 @@ CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG=n
 #
 CONFIG_RTE_LIBRTE_MBUF=y
 CONFIG_RTE_LIBRTE_MBUF_DEBUG=n
+CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS="ring_mp_mc"
 CONFIG_RTE_MBUF_REFCNT_ATOMIC=y
 CONFIG_RTE_PKTMBUF_HEADROOM=128
 
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index eec1456..491230c 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -153,6 +153,7 @@ rte_pktmbuf_pool_create(const char *name, unsigned n,
 	unsigned cache_size, uint16_t priv_size, uint16_t data_room_size,
 	int socket_id)
 {
+	struct rte_mempool *mp;
 	struct rte_pktmbuf_pool_private mbp_priv;
 	unsigned elt_size;
 
@@ -167,10 +168,27 @@ rte_pktmbuf_pool_create(const char *name, unsigned n,
 	mbp_priv.mbuf_data_room_size = data_room_size;
 	mbp_priv.mbuf_priv_size = priv_size;
 
-	return rte_mempool_create(name, n, elt_size,
-		cache_size, sizeof(struct rte_pktmbuf_pool_private),
-		rte_pktmbuf_pool_init, &mbp_priv, rte_pktmbuf_init, NULL,
-		socket_id, 0);
+	mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
+		 sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
+	if (mp == NULL)
+		return NULL;
+
+	rte_errno = rte_mempool_set_ops_byname(mp,
+			RTE_MBUF_DEFAULT_MEMPOOL_OPS);
+	if (rte_errno != 0) {
+		RTE_LOG(ERR, MBUF, "error setting mempool handler\n");
+		return NULL;
+	}
+	rte_pktmbuf_pool_init(mp, &mbp_priv);
+
+	if (rte_mempool_populate_default(mp) < 0) {
+		rte_mempool_free(mp);
+		return NULL;
+	}
+
+	rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);
+
+	return mp;
 }
 
 /* do some sanity checks on a mbuf: panic if it fails */
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 238+ messages in thread

* Re: [PATCH v8 1/3] mempool: support external mempool operations
  2016-06-03 14:58               ` [PATCH v8 1/3] mempool: support external mempool operations David Hunt
@ 2016-06-06 14:32                 ` Shreyansh Jain
  2016-06-06 14:38                 ` Shreyansh Jain
                                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 238+ messages in thread
From: Shreyansh Jain @ 2016-06-06 14:32 UTC (permalink / raw)
  To: David Hunt, dev; +Cc: olivier.matz, viktorin, jerin.jacob

Hi,

This is more of a question/clarification than a comment. (And I have taken only some snippets from original mail to keep it cleaner)

<snip>
> +MEMPOOL_REGISTER_OPS(ops_mp_mc);
> +MEMPOOL_REGISTER_OPS(ops_sp_sc);
> +MEMPOOL_REGISTER_OPS(ops_mp_sc);
> +MEMPOOL_REGISTER_OPS(ops_sp_mc);
<snip>


<snip>
> +	/*
> +	 * Since we have 4 combinations of the SP/SC/MP/MC examine the flags to
> +	 * set the correct index into the table of ops structs.
> +	 */
> +	if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
> +		rte_mempool_set_ops_byname(mp, "ring_sp_sc");
> +	else if (flags & MEMPOOL_F_SP_PUT)
> +		rte_mempool_set_ops_byname(mp, "ring_sp_mc");
> +	else if (flags & MEMPOOL_F_SC_GET)
> +		rte_mempool_set_ops_byname(mp, "ring_mp_sc");
> +	else
> +		rte_mempool_set_ops_byname(mp, "ring_mp_mc");
> +

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v8 1/3] mempool: support external mempool operations
  2016-06-03 14:58               ` [PATCH v8 1/3] mempool: support external mempool operations David Hunt
  2016-06-06 14:32                 ` Shreyansh Jain
@ 2016-06-06 14:38                 ` Shreyansh Jain
  2016-06-07  9:25                   ` Hunt, David
  2016-06-07  9:05                 ` Shreyansh Jain
                                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 238+ messages in thread
From: Shreyansh Jain @ 2016-06-06 14:38 UTC (permalink / raw)
  To: David Hunt, dev; +Cc: olivier.matz, viktorin, jerin.jacob

Hi,

(Apologies for overly-eager email sent on this thread earlier. Will be more careful in future).

This is more of a question/clarification than a comment. (And I have taken only some snippets from original mail to keep it cleaner)

<snip>
> +MEMPOOL_REGISTER_OPS(ops_mp_mc);
> +MEMPOOL_REGISTER_OPS(ops_sp_sc);
> +MEMPOOL_REGISTER_OPS(ops_mp_sc);
> +MEMPOOL_REGISTER_OPS(ops_sp_mc);
<snip>

From the above what I understand is that multiple packet pool handlers can be created.

I have a use-case where the application has multiple pools but only the packet pool is hardware backed. Using the hardware for general buffer requirements would prove costly.
From what I understand from the patch, selection of the pool is based on the flags below.

<snip>
> +	/*
> +	 * Since we have 4 combinations of the SP/SC/MP/MC examine the flags to
> +	 * set the correct index into the table of ops structs.
> +	 */
> +	if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
> +		rte_mempool_set_ops_byname(mp, "ring_sp_sc");
> +	else if (flags & MEMPOOL_F_SP_PUT)
> +		rte_mempool_set_ops_byname(mp, "ring_sp_mc");
> +	else if (flags & MEMPOOL_F_SC_GET)
> +		rte_mempool_set_ops_byname(mp, "ring_mp_sc");
> +	else
> +		rte_mempool_set_ops_byname(mp, "ring_mp_mc");
> +

Is there any way I can achieve the above use case of multiple pools which can be selected by an application - something like a run-time toggle/flag?

-
Shreyansh

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v8 1/3] mempool: support external mempool operations
  2016-06-03 14:58               ` [PATCH v8 1/3] mempool: support external mempool operations David Hunt
  2016-06-06 14:32                 ` Shreyansh Jain
  2016-06-06 14:38                 ` Shreyansh Jain
@ 2016-06-07  9:05                 ` Shreyansh Jain
  2016-06-08 12:13                 ` Olivier Matz
  2016-06-08 14:28                 ` Shreyansh Jain
  4 siblings, 0 replies; 238+ messages in thread
From: Shreyansh Jain @ 2016-06-07  9:05 UTC (permalink / raw)
  To: David Hunt, dev; +Cc: olivier.matz, viktorin, jerin.jacob

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of David Hunt
> Sent: Friday, June 03, 2016 8:28 PM
> To: dev@dpdk.org
> Cc: olivier.matz@6wind.com; viktorin@rehivetech.com;
> jerin.jacob@caviumnetworks.com; David Hunt <david.hunt@intel.com>
> Subject: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool
> operations
> 

[...]

> +int
> +rte_mempool_ops_register(const struct rte_mempool_ops *h)
> +{
> +	struct rte_mempool_ops *ops;
> +	int16_t ops_index;
> +
> +	rte_spinlock_lock(&rte_mempool_ops_table.sl);
> +
> +	if (rte_mempool_ops_table.num_ops >=
> +			RTE_MEMPOOL_MAX_OPS_IDX) {
> +		rte_spinlock_unlock(&rte_mempool_ops_table.sl);
> +		RTE_LOG(ERR, MEMPOOL,
> +			"Maximum number of mempool ops structs exceeded\n");
> +		return -ENOSPC;
> +	}
> +
> +	if (h->put == NULL || h->get == NULL || h->get_count == NULL) {
> +		rte_spinlock_unlock(&rte_mempool_ops_table.sl);
> +		RTE_LOG(ERR, MEMPOOL,
> +			"Missing callback while registering mempool ops\n");
> +		return -EINVAL;
> +	}
> +
> +	if (strlen(h->name) >= sizeof(ops->name) - 1) {
> +		RTE_LOG(DEBUG, EAL, "%s(): mempool_ops <%s>: name too long\n",
> +				__func__, h->name);
> +		rte_errno = EEXIST;
> +		return NULL;

rte_mempool_ops_register has return type 'int'. Above should be 'return rte_errno;', or probably 'return -EEXIST;' itself.

> +	}
> +
> +	ops_index = rte_mempool_ops_table.num_ops++;
> +	ops = &rte_mempool_ops_table.ops[ops_index];
> +	snprintf(ops->name, sizeof(ops->name), "%s", h->name);
> +	ops->alloc = h->alloc;
> +	ops->put = h->put;
> +	ops->get = h->get;
> +	ops->get_count = h->get_count;
> +
> +	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
> +
> +	return ops_index;
> +}
> +
[...]

-
Shreyansh

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v8 1/3] mempool: support external mempool operations
  2016-06-06 14:38                 ` Shreyansh Jain
@ 2016-06-07  9:25                   ` Hunt, David
  2016-06-08 13:48                     ` Shreyansh Jain
  0 siblings, 1 reply; 238+ messages in thread
From: Hunt, David @ 2016-06-07  9:25 UTC (permalink / raw)
  To: Shreyansh Jain, dev; +Cc: olivier.matz, viktorin, jerin.jacob

Hi Shreyansh,

On 6/6/2016 3:38 PM, Shreyansh Jain wrote:
> Hi,
>
> (Apologies for overly-eager email sent on this thread earlier. Will be more careful in future).
>
> This is more of a question/clarification than a comment. (And I have taken only some snippets from original mail to keep it cleaner)
>
> <snip>
>> +MEMPOOL_REGISTER_OPS(ops_mp_mc);
>> +MEMPOOL_REGISTER_OPS(ops_sp_sc);
>> +MEMPOOL_REGISTER_OPS(ops_mp_sc);
>> +MEMPOOL_REGISTER_OPS(ops_sp_mc);
> <snip>
>
>  From the above what I understand is that multiple packet pool handlers can be created.
>
> I have a use-case where application has multiple pools but only the packet pool is hardware backed. Using the hardware for general buffer requirements would prove costly.
>  From what I understand from the patch, selection of the pool is based on the flags below.

The flags are only used to select one of the default handlers, for
backward compatibility through the rte_mempool_create call. If you wish
to use a mempool handler that is not one of the defaults (i.e. a new
hardware handler), you would use the rte_mempool_create_empty call
followed by the rte_mempool_set_ops_byname call. So, for the external
handlers, you create an empty mempool, then set the operations (ops)
for that particular mempool.

> <snip>
>> +	/*
>> +	 * Since we have 4 combinations of the SP/SC/MP/MC examine the flags to
>> +	 * set the correct index into the table of ops structs.
>> +	 */
>> +	if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
>> +		rte_mempool_set_ops_byname(mp, "ring_sp_sc");
>> +	else if (flags & MEMPOOL_F_SP_PUT)
>> +		rte_mempool_set_ops_byname(mp, "ring_sp_mc");
>> +	else if (flags & MEMPOOL_F_SC_GET)
>> +		rte_mempool_set_ops_byname(mp, "ring_mp_sc");
>> +	else
>> +		rte_mempool_set_ops_byname(mp, "ring_mp_mc");
>> +
> Is there any way I can achieve the above use case of multiple pools which can be selected by an application - something like a run-time toggle/flag?
>
> -
> Shreyansh

Yes, you can create multiple pools, some using the default handlers,
and some using external handlers. There is an example of this in the
autotests (app/test/test_mempool.c). This test creates multiple
mempools, of which one is a custom malloc based mempool handler. The
test puts and gets mbufs to/from each mempool, all in the same
application.

Regards,
Dave.

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v8 1/3] mempool: support external mempool operations
  2016-06-03 14:58               ` [PATCH v8 1/3] mempool: support external mempool operations David Hunt
                                   ` (2 preceding siblings ...)
  2016-06-07  9:05                 ` Shreyansh Jain
@ 2016-06-08 12:13                 ` Olivier Matz
  2016-06-09 10:33                   ` Hunt, David
  2016-06-08 14:28                 ` Shreyansh Jain
  4 siblings, 1 reply; 238+ messages in thread
From: Olivier Matz @ 2016-06-08 12:13 UTC (permalink / raw)
  To: David Hunt, dev; +Cc: viktorin, jerin.jacob

Hi David,

Please find some comments below.

On 06/03/2016 04:58 PM, David Hunt wrote:

> --- a/lib/librte_mempool/rte_mempool.h
> +++ b/lib/librte_mempool/rte_mempool.h

> +/**
> + * Prototype for implementation specific data provisioning function.
> + *
> + * The function should provide the implementation specific memory for
> + * for use by the other mempool ops functions in a given mempool ops struct.
> + * E.g. the default ops provides an instance of the rte_ring for this purpose.
> + * it will mostlikely point to a different type of data structure, and
> + * will be transparent to the application programmer.
> + */
> +typedef void *(*rte_mempool_alloc_t)(struct rte_mempool *mp);

In http://dpdk.org/ml/archives/dev/2016-June/040233.html, I suggested
to change the prototype to return an int (-errno) and directly set
mp->pool_data (void *) or mp->pool_if (uint64_t). No cast
would be required in this latter case.

By the way, there is a typo in the comment:
"mostlikely" -> "most likely"

> --- /dev/null
> +++ b/lib/librte_mempool/rte_mempool_default.c

> +static void
> +common_ring_free(struct rte_mempool *mp)
> +{
> +	rte_ring_free((struct rte_ring *)mp->pool_data);
> +}

I don't think the cast is needed here.
(same in other functions)


> --- /dev/null
> +++ b/lib/librte_mempool/rte_mempool_ops.c

> +/* add a new ops struct in rte_mempool_ops_table, return its index */
> +int
> +rte_mempool_ops_register(const struct rte_mempool_ops *h)
> +{
> +	struct rte_mempool_ops *ops;
> +	int16_t ops_index;
> +
> +	rte_spinlock_lock(&rte_mempool_ops_table.sl);
> +
> +	if (rte_mempool_ops_table.num_ops >=
> +			RTE_MEMPOOL_MAX_OPS_IDX) {
> +		rte_spinlock_unlock(&rte_mempool_ops_table.sl);
> +		RTE_LOG(ERR, MEMPOOL,
> +			"Maximum number of mempool ops structs exceeded\n");
> +		return -ENOSPC;
> +	}
> +
> +	if (h->put == NULL || h->get == NULL || h->get_count == NULL) {
> +		rte_spinlock_unlock(&rte_mempool_ops_table.sl);
> +		RTE_LOG(ERR, MEMPOOL,
> +			"Missing callback while registering mempool ops\n");
> +		return -EINVAL;
> +	}
> +
> +	if (strlen(h->name) >= sizeof(ops->name) - 1) {
> +		RTE_LOG(DEBUG, EAL, "%s(): mempool_ops <%s>: name too long\n",
> +				__func__, h->name);
> +		rte_errno = EEXIST;
> +		return NULL;
> +	}

This has already been noticed by Shreyansh; in any case, it results in:

rte_mempool_ops.c:75:10: error: return makes integer from pointer
without a cast [-Werror=int-conversion]
   return NULL;
          ^


> +/* sets mempool ops previously registered by rte_mempool_ops_register */
> +int
> +rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name)


When I compile with shared libraries enabled, I get the following error:

librte_reorder.so: undefined reference to `rte_mempool_ops_table'
librte_mbuf.so: undefined reference to `rte_mempool_set_ops_byname'
...

The new functions and global variables must be in
rte_mempool_version.map. This was in v5
( http://dpdk.org/ml/archives/dev/2016-May/039365.html ) but
was dropped in v6.
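For reference, the missing export entries might look something like the fragment below (the exact version node names and symbol list are assumptions based on the functions discussed in this thread, not the actual v5 map file):

```
DPDK_16.07 {
	global:

	rte_mempool_ops_table;
	rte_mempool_ops_register;
	rte_mempool_set_ops_byname;

} DPDK_16.04;
```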




Regards,
Olivier

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v8 1/3] mempool: support external mempool operations
  2016-06-07  9:25                   ` Hunt, David
@ 2016-06-08 13:48                     ` Shreyansh Jain
  2016-06-09  9:39                       ` Hunt, David
  0 siblings, 1 reply; 238+ messages in thread
From: Shreyansh Jain @ 2016-06-08 13:48 UTC (permalink / raw)
  To: Hunt, David, dev; +Cc: olivier.matz, viktorin, jerin.jacob

Hi David,

Thanks for explanation. I have some comments inline...

> -----Original Message-----
> From: Hunt, David [mailto:david.hunt@intel.com]
> Sent: Tuesday, June 07, 2016 2:56 PM
> To: Shreyansh Jain <shreyansh.jain@nxp.com>; dev@dpdk.org
> Cc: olivier.matz@6wind.com; viktorin@rehivetech.com;
> jerin.jacob@caviumnetworks.com
> Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool
> operations
> 
> Hi Shreyansh,
> 
> On 6/6/2016 3:38 PM, Shreyansh Jain wrote:
> > Hi,
> >
> > (Apologies for overly-eager email sent on this thread earlier. Will be more
> careful in future).
> >
> > This is more of a question/clarification than a comment. (And I have taken
> only some snippets from original mail to keep it cleaner)
> >
> > <snip>
> >> +MEMPOOL_REGISTER_OPS(ops_mp_mc);
> >> +MEMPOOL_REGISTER_OPS(ops_sp_sc);
> >> +MEMPOOL_REGISTER_OPS(ops_mp_sc);
> >> +MEMPOOL_REGISTER_OPS(ops_sp_mc);
> > <snip>
> >
> >  From the above what I understand is that multiple packet pool handlers can
> be created.
> >
> > I have a use-case where application has multiple pools but only the packet
> pool is hardware backed. Using the hardware for general buffer requirements
> would prove costly.
> >  From what I understand from the patch, selection of the pool is based on
> the flags below.
> 
> The flags are only used to select one of the default handlers for
> backward compatibility through
> the rte_mempool_create call. If you wish to use a mempool handler that
> is not one of the
> defaults, (i.e. a new hardware handler), you would use the
> rte_create_mempool_empty
> followed by the rte_mempool_set_ops_byname call.
> So, for the external handlers, you create and empty mempool, then set
> the operations (ops)
> for that particular mempool.

I am concerned about the existing applications (for example, l3fwd).
An explicit 'rte_mempool_create_empty -> rte_mempool_set_ops_byname' call model would require modifications to these applications.
Ideally, without any modifications, these applications should be able to use packet pools (backed by hardware) and buffer pools (backed by ring/others) - transparently.

If I go by your suggestions, what I understand is, doing the above without modification to applications would be equivalent to:

  struct rte_mempool_ops custom_hw_allocator = {...}

thereafter, in config/common_base:

  CONFIG_RTE_DEFAULT_MEMPOOL_OPS="custom_hw_allocator"

calls to rte_pktmbuf_pool_create would use the new allocator.

But, another problem arises here.

There are two distinct paths for allocations of a memory pool:
1. A 'pkt' pool:
   rte_pktmbuf_pool_create   
     \- rte_mempool_create_empty
     |   \- rte_mempool_set_ops_byname(..ring_mp_mc..)
     |
     `- rte_mempool_set_ops_byname
           (...RTE_MBUF_DEFAULT_MEMPOOL_OPS..)
           /* Override default 'ring_mp_mc' of
            * rte_mempool_create */

2. Through generic mempool create API
   rte_mempool_create
     \- rte_mempool_create_empty
           (passing pktmbuf and pool constructors)
  
I found various instances in example applications where rte_mempool_create() is being called directly for packet pools - bypassing the more semantically correct call to rte_pktmbuf_* for packet pools.

In (2) control path, RTE_MBUF_DEFAULT_MEMPOOL_OPS wouldn't be able to replace custom handler operations for packet buffer allocations.

From a performance point of view, applications should be able to select between packet pools and non-packet pools.

> 
> > <snip>
> >> +	/*
> >> +	 * Since we have 4 combinations of the SP/SC/MP/MC examine the flags to
> >> +	 * set the correct index into the table of ops structs.
> >> +	 */
> >> +	if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
> >> +		rte_mempool_set_ops_byname(mp, "ring_sp_sc");
> >> +	else if (flags & MEMPOOL_F_SP_PUT)
> >> +		rte_mempool_set_ops_byname(mp, "ring_sp_mc");
> >> +	else if (flags & MEMPOOL_F_SC_GET)
> >> +		rte_mempool_set_ops_byname(mp, "ring_mp_sc");
> >> +	else
> >> +		rte_mempool_set_ops_byname(mp, "ring_mp_mc");
> >> +

My suggestion is to have an additional flag, 'MEMPOOL_F_PKT_ALLOC', which, if specified, would:

...
#define MEMPOOL_F_SC_GET    0x0008
#define MEMPOOL_F_PKT_ALLOC 0x0010
...

in rte_mempool_create_empty:
   ... after checking the other MEMPOOL_F_* flags...

    if (flags & MEMPOOL_F_PKT_ALLOC)
        rte_mempool_set_ops_byname(mp, RTE_MBUF_DEFAULT_MEMPOOL_OPS)

And removing the redundant call to rte_mempool_set_ops_byname() in rte_pktmbuf_pool_create().

Thereafter, rte_pktmbuf_pool_create can be changed to:

      ...
    mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
-        sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
+        sizeof(struct rte_pktmbuf_pool_private), socket_id,
+        MEMPOOL_F_PKT_ALLOC);
    if (mp == NULL)
        return NULL;

> > Is there any way I can achieve the above use case of multiple pools which
> can be selected by an application - something like a run-time toggle/flag?
> >
> > -
> > Shreyansh
> 
> Yes, you can create multiple pools, some using the default handlers, and
> some using external handlers.
> There is an example of this in the autotests (app/test/test_mempool.c).
> This test creates multiple
> mempools, of which one is a custom malloc based mempool handler. The
> test puts and gets mbufs
> to/from each mempool all in the same application.

Thanks for the explanation.

> 
> Regards,
> Dave.
> 

-
Shreyansh

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v8 1/3] mempool: support external mempool operations
  2016-06-03 14:58               ` [PATCH v8 1/3] mempool: support external mempool operations David Hunt
                                   ` (3 preceding siblings ...)
  2016-06-08 12:13                 ` Olivier Matz
@ 2016-06-08 14:28                 ` Shreyansh Jain
  4 siblings, 0 replies; 238+ messages in thread
From: Shreyansh Jain @ 2016-06-08 14:28 UTC (permalink / raw)
  To: David Hunt, dev; +Cc: olivier.matz, viktorin, jerin.jacob

Hi David,

Sorry for multiple mails on a patch. I forgot a trivial comment in previous mail.

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of David Hunt
> Sent: Friday, June 03, 2016 8:28 PM
> To: dev@dpdk.org
> Cc: olivier.matz@6wind.com; viktorin@rehivetech.com;
> jerin.jacob@caviumnetworks.com; David Hunt <david.hunt@intel.com>
> Subject: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool
> operations
> 
[...]
> +int
> +rte_mempool_ops_register(const struct rte_mempool_ops *h)
> +{
> +	struct rte_mempool_ops *ops;
> +	int16_t ops_index;
> +
> +	rte_spinlock_lock(&rte_mempool_ops_table.sl);
> +
> +	if (rte_mempool_ops_table.num_ops >=
> +			RTE_MEMPOOL_MAX_OPS_IDX) {
> +		rte_spinlock_unlock(&rte_mempool_ops_table.sl);
> +		RTE_LOG(ERR, MEMPOOL,
> +			"Maximum number of mempool ops structs exceeded\n");
> +		return -ENOSPC;
> +	}
> +
> +	if (h->put == NULL || h->get == NULL || h->get_count == NULL) {

I think 'h->alloc' should also be checked here.

> +		rte_spinlock_unlock(&rte_mempool_ops_table.sl);
> +		RTE_LOG(ERR, MEMPOOL,
> +			"Missing callback while registering mempool ops\n");
> +		return -EINVAL;
> +	}
> +
> +	if (strlen(h->name) >= sizeof(ops->name) - 1) {
> +		RTE_LOG(DEBUG, EAL, "%s(): mempool_ops <%s>: name too long\n",
> +				__func__, h->name);
> +		rte_errno = EEXIST;
> +		return NULL;
> +	}
> +
> +	ops_index = rte_mempool_ops_table.num_ops++;
> +	ops = &rte_mempool_ops_table.ops[ops_index];
> +	snprintf(ops->name, sizeof(ops->name), "%s", h->name);
> +	ops->alloc = h->alloc;
> +	ops->put = h->put;
> +	ops->get = h->get;
> +	ops->get_count = h->get_count;
> +
> +	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
> +
> +	return ops_index;
> +}
> +
[...]

-
Shreyansh

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v8 1/3] mempool: support external mempool operations
  2016-06-08 13:48                     ` Shreyansh Jain
@ 2016-06-09  9:39                       ` Hunt, David
  2016-06-09 10:31                         ` Jerin Jacob
                                           ` (2 more replies)
  0 siblings, 3 replies; 238+ messages in thread
From: Hunt, David @ 2016-06-09  9:39 UTC (permalink / raw)
  To: Shreyansh Jain, dev; +Cc: olivier.matz, viktorin, jerin.jacob

Hi Shreyansh,

On 8/6/2016 2:48 PM, Shreyansh Jain wrote:
> Hi David,
>
> Thanks for explanation. I have some comments inline...
>
>> -----Original Message-----
>> From: Hunt, David [mailto:david.hunt@intel.com]
>> Sent: Tuesday, June 07, 2016 2:56 PM
>> To: Shreyansh Jain <shreyansh.jain@nxp.com>; dev@dpdk.org
>> Cc: olivier.matz@6wind.com; viktorin@rehivetech.com;
>> jerin.jacob@caviumnetworks.com
>> Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool
>> operations
>>
>> Hi Shreyansh,
>>
>> On 6/6/2016 3:38 PM, Shreyansh Jain wrote:
>>> Hi,
>>>
>>> (Apologies for overly-eager email sent on this thread earlier. Will be more
>> careful in future).
>>> This is more of a question/clarification than a comment. (And I have taken
>> only some snippets from original mail to keep it cleaner)
>>> <snip>
>>>> +MEMPOOL_REGISTER_OPS(ops_mp_mc);
>>>> +MEMPOOL_REGISTER_OPS(ops_sp_sc);
>>>> +MEMPOOL_REGISTER_OPS(ops_mp_sc);
>>>> +MEMPOOL_REGISTER_OPS(ops_sp_mc);
>>> <snip>
>>>
>>>   From the above what I understand is that multiple packet pool handlers can
>> be created.
>>> I have a use-case where application has multiple pools but only the packet
>> pool is hardware backed. Using the hardware for general buffer requirements
>> would prove costly.
>>>   From what I understand from the patch, selection of the pool is based on
>> the flags below.
>>
>> The flags are only used to select one of the default handlers for
>> backward compatibility through
>> the rte_mempool_create call. If you wish to use a mempool handler that
>> is not one of the
>> defaults, (i.e. a new hardware handler), you would use the
>> rte_create_mempool_empty
>> followed by the rte_mempool_set_ops_byname call.
>> So, for the external handlers, you create and empty mempool, then set
>> the operations (ops)
>> for that particular mempool.
> I am concerned about the existing applications (for example, l3fwd).
> Explicit calls to 'rte_create_mempool_empty->rte_mempool_set_ops_byname' model would require modifications to these applications.
> Ideally, without any modifications, these applications should be able to use packet pools (backed by hardware) and buffer pools (backed by ring/others) - transparently.
>
> If I go by your suggestions, what I understand is, doing the above without modification to applications would be equivalent to:
>
>    struct rte_mempool_ops custom_hw_allocator = {...}
>
> thereafter, in config/common_base:
>
>    CONFIG_RTE_DEFAULT_MEMPOOL_OPS="custom_hw_allocator"
>
> calls to rte_pktmbuf_pool_create would use the new allocator.

Yes, correct. But only for calls to rte_pktmbuf_pool_create(). Calls to 
rte_mempool_create will continue to use the default handlers (ring based).
> But, another problem arises here.
>
> There are two distinct paths for allocations of a memory pool:
> 1. A 'pkt' pool:
>     rte_pktmbuf_pool_create
>       \- rte_mempool_create_empty
>       |   \- rte_mempool_set_ops_byname(..ring_mp_mc..)
>       |
>       `- rte_mempool_set_ops_byname
>             (...RTE_MBUF_DEFAULT_MEMPOOL_OPS..)
>             /* Override default 'ring_mp_mc' of
>              * rte_mempool_create */
>
> 2. Through generic mempool create API
>     rte_mempool_create
>       \- rte_mempool_create_empty
>             (passing pktmbuf and pool constructors)
>    
> I found various instances in example applications where rte_mempool_create() is being called directly for packet pools - bypassing the more semantically correct call to rte_pktmbuf_* for packet pools.
>
> In (2) control path, RTE_MBUF_DEFAULT_MEMPOOLS_OPS wouldn't be able to replace custom handler operations for packet buffer allocations.
>
>  From a performance point-of-view, Applications should be able to select between packet pools and non-packet pools.

This is intended for backward compatibility, and API consistency. Any
applications that use rte_mempool_create directly will continue to use
the default mempool handlers. If they need to use a custom handler,
they will need to be modified to call the newer API,
rte_mempool_create_empty and rte_mempool_set_ops_byname.


>>> <snip>
>>>> +	/*
>>>> +	 * Since we have 4 combinations of the SP/SC/MP/MC examine the flags to
>>>> +	 * set the correct index into the table of ops structs.
>>>> +	 */
>>>> +	if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
>>>> +		rte_mempool_set_ops_byname(mp, "ring_sp_sc");
>>>> +	else if (flags & MEMPOOL_F_SP_PUT)
>>>> +		rte_mempool_set_ops_byname(mp, "ring_sp_mc");
>>>> +	else if (flags & MEMPOOL_F_SC_GET)
>>>> +		rte_mempool_set_ops_byname(mp, "ring_mp_sc");
>>>> +	else
>>>> +		rte_mempool_set_ops_byname(mp, "ring_mp_mc");
>>>> +
> My suggestion is to have an additional flag, 'MEMPOOL_F_PKT_ALLOC', which, if specified, would:
>
> ...
> #define MEMPOOL_F_SC_GET    0x0008
> #define MEMPOOL_F_PKT_ALLOC 0x0010
> ...
>
> in rte_mempool_create_empty:
>     ... after checking the other MEMPOOL_F_* flags...
>
>      if (flags & MEMPOOL_F_PKT_ALLOC)
>          rte_mempool_set_ops_byname(mp, RTE_MBUF_DEFAULT_MEMPOOL_OPS)
>
> And removing the redundant call to rte_mempool_set_ops_byname() in rte_pktmbuf_create_pool().
>
> Thereafter, rte_pktmbuf_pool_create can be changed to:
>
>        ...
>      mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
> -        sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
> +        sizeof(struct rte_pktmbuf_pool_private), socket_id,
> +        MEMPOOL_F_PKT_ALLOC);
>      if (mp == NULL)
>          return NULL;

Yes, this would simplify somewhat the creation of a pktmbuf pool, in
that it replaces the rte_mempool_set_ops_byname call with a flag bit.
However, I'm not sure we want to introduce a third method of creating
a mempool to the developers. If we introduced this, we would then have:
1. rte_pktmbuf_pool_create()
2. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC set (which would
    use the configured custom handler)
3. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC __not__ set,
    followed by a call to rte_mempool_set_ops_byname() (which would
    allow several different custom handlers to be used in one
    application)

Does anyone else have an opinion on this? Olivier, Jerin, Jan?

Regards,
Dave.

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v8 1/3] mempool: support external mempool operations
  2016-06-09  9:39                       ` Hunt, David
@ 2016-06-09 10:31                         ` Jerin Jacob
  2016-06-09 11:06                           ` Hunt, David
  2016-06-09 11:49                           ` Shreyansh Jain
  2016-06-09 11:41                         ` Shreyansh Jain
  2016-06-09 13:09                         ` Jan Viktorin
  2 siblings, 2 replies; 238+ messages in thread
From: Jerin Jacob @ 2016-06-09 10:31 UTC (permalink / raw)
  To: Hunt, David; +Cc: Shreyansh Jain, dev, olivier.matz, viktorin

On Thu, Jun 09, 2016 at 10:39:46AM +0100, Hunt, David wrote:
> Hi Shreyansh,
> 
> On 8/6/2016 2:48 PM, Shreyansh Jain wrote:
> > Hi David,
> > 
> > Thanks for explanation. I have some comments inline...
> > 
> > > -----Original Message-----
> > > From: Hunt, David [mailto:david.hunt@intel.com]
> > > Sent: Tuesday, June 07, 2016 2:56 PM
> > > To: Shreyansh Jain <shreyansh.jain@nxp.com>; dev@dpdk.org
> > > Cc: olivier.matz@6wind.com; viktorin@rehivetech.com;
> > > jerin.jacob@caviumnetworks.com
> > > Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool
> > > operations
> > > 
> > > Hi Shreyansh,
> > > 
> > > On 6/6/2016 3:38 PM, Shreyansh Jain wrote:
> > > > Hi,
> > > > 
> > > > (Apologies for overly-eager email sent on this thread earlier. Will be more
> > > careful in future).
> > > > This is more of a question/clarification than a comment. (And I have taken
> > > only some snippets from original mail to keep it cleaner)
> > > > <snip>
> > > > > +MEMPOOL_REGISTER_OPS(ops_mp_mc);
> > > > > +MEMPOOL_REGISTER_OPS(ops_sp_sc);
> > > > > +MEMPOOL_REGISTER_OPS(ops_mp_sc);
> > > > > +MEMPOOL_REGISTER_OPS(ops_sp_mc);
> > > > <snip>
> > > > 
> > > >   From the above what I understand is that multiple packet pool handlers can
> > > be created.
> > > > I have a use-case where application has multiple pools but only the packet
> > > pool is hardware backed. Using the hardware for general buffer requirements
> > > would prove costly.
> > > >   From what I understand from the patch, selection of the pool is based on
> > > the flags below.
> > > 
> > > The flags are only used to select one of the default handlers for
> > > backward compatibility through
> > > the rte_mempool_create call. If you wish to use a mempool handler that
> > > is not one of the
> > > defaults, (i.e. a new hardware handler), you would use the
> > > rte_create_mempool_empty
> > > followed by the rte_mempool_set_ops_byname call.
> > > So, for the external handlers, you create and empty mempool, then set
> > > the operations (ops)
> > > for that particular mempool.
> > I am concerned about the existing applications (for example, l3fwd).
> > Explicit calls to 'rte_create_mempool_empty->rte_mempool_set_ops_byname' model would require modifications to these applications.
> > Ideally, without any modifications, these applications should be able to use packet pools (backed by hardware) and buffer pools (backed by ring/others) - transparently.
> > 
> > If I go by your suggestions, what I understand is, doing the above without modification to applications would be equivalent to:
> > 
> >    struct rte_mempool_ops custom_hw_allocator = {...}
> > 
> > thereafter, in config/common_base:
> > 
> >    CONFIG_RTE_DEFAULT_MEMPOOL_OPS="custom_hw_allocator"
> > 
> > calls to rte_pktmbuf_pool_create would use the new allocator.
> 
> Yes, correct. But only for calls to rte_pktmbuf_pool_create(). Calls to
> rte_mempool_create will continue to use the default handlers (ring based).
> > But, another problem arises here.
> > 
> > There are two distinct paths for allocations of a memory pool:
> > 1. A 'pkt' pool:
> >     rte_pktmbuf_pool_create
> >       \- rte_mempool_create_empty
> >       |   \- rte_mempool_set_ops_byname(..ring_mp_mc..)
> >       |
> >       `- rte_mempool_set_ops_byname
> >             (...RTE_MBUF_DEFAULT_MEMPOOL_OPS..)
> >             /* Override default 'ring_mp_mc' of
> >              * rte_mempool_create */
> > 
> > 2. Through generic mempool create API
> >     rte_mempool_create
> >       \- rte_mempool_create_empty
> >             (passing pktmbuf and pool constructors)
> > I found various instances in example applications where rte_mempool_create() is being called directly for packet pools - bypassing the more semantically correct call to rte_pktmbuf_* for packet pools.
> > 
> > In (2) control path, RTE_MBUF_DEFAULT_MEMPOOLS_OPS wouldn't be able to replace custom handler operations for packet buffer allocations.
> > 
> >  From a performance point-of-view, Applications should be able to select between packet pools and non-packet pools.
> 
> This is intended for backward compatibility and API consistency. Any
> applications that use rte_mempool_create directly will continue to use the
> default mempool handlers. If they need to use a custom handler, they will
> need to be modified to call the newer APIs, rte_mempool_create_empty and
> rte_mempool_set_ops_byname.
> 
> 
> > > > <snip>
> > > > > +	/*
> > > > > +	 * Since we have 4 combinations of the SP/SC/MP/MC examine the flags to
> > > > > +	 * set the correct index into the table of ops structs.
> > > > > +	 */
> > > > > +	if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
> > > > > +		rte_mempool_set_ops_byname(mp, "ring_sp_sc");
> > > > > +	else if (flags & MEMPOOL_F_SP_PUT)
> > > > > +		rte_mempool_set_ops_byname(mp, "ring_sp_mc");
> > > > > +	else if (flags & MEMPOOL_F_SC_GET)
> > > > > +		rte_mempool_set_ops_byname(mp, "ring_mp_sc");
> > > > > +	else
> > > > > +		rte_mempool_set_ops_byname(mp, "ring_mp_mc");
> > > > > +
> > My suggestion is to have an additional flag, 'MEMPOOL_F_PKT_ALLOC', which, if specified, would:
> > 
> > ...
> > #define MEMPOOL_F_SC_GET    0x0008
> > #define MEMPOOL_F_PKT_ALLOC 0x0010
> > ...
> > 
> > in rte_mempool_create_empty:
> >     ... after checking the other MEMPOOL_F_* flags...
> > 
> >      if (flags & MEMPOOL_F_PKT_ALLOC)
> >          rte_mempool_set_ops_byname(mp, RTE_MBUF_DEFAULT_MEMPOOL_OPS)
> > 
> > And removing the redundant call to rte_mempool_set_ops_byname() in rte_pktmbuf_create_pool().
> > 
> > Thereafter, rte_pktmbuf_pool_create can be changed to:
> > 
> >        ...
> >      mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
> > -        sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
> > +        sizeof(struct rte_pktmbuf_pool_private), socket_id,
> > +        MEMPOOL_F_PKT_ALLOC);
> >      if (mp == NULL)
> >          return NULL;
> 
> Yes, this would somewhat simplify the creation of a pktmbuf pool, in that
> it replaces the rte_mempool_set_ops_byname call with a flag bit. However,
> I'm not sure we want to introduce a third method of creating a mempool to
> developers. If we introduced this, we would then have:
> 1. rte_pktmbuf_pool_create()
> 2. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC set (which would
>    use the configured custom handler)
> 3. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC __not__ set,
>    followed by a call to rte_mempool_set_ops_byname() (which would allow
>    several different custom handlers to be used in one application)
> 
> Does anyone else have an opinion on this? Oliver, Jerin, Jan?

As I mentioned earlier, my take is not to create separate APIs for
external mempool handlers. In my view, it's the same thing, just a
separate mempool handler selected through function pointers.

To keep backward compatibility, I think we can extend the flags in
rte_mempool_create and have a single API for external/internal pool
creation (this also makes it easy for existing applications: just add a
mempool-handler command line argument to choose the handler).

Jerin

> 
> Regards,
> Dave.
> 
> 

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v8 1/3] mempool: support external mempool operations
  2016-06-08 12:13                 ` Olivier Matz
@ 2016-06-09 10:33                   ` Hunt, David
  0 siblings, 0 replies; 238+ messages in thread
From: Hunt, David @ 2016-06-09 10:33 UTC (permalink / raw)
  To: Olivier Matz, dev; +Cc: viktorin, jerin.jacob

Hi Olivier,

On 8/6/2016 1:13 PM, Olivier Matz wrote:
> Hi David,
>
> Please find some comments below.
>
> On 06/03/2016 04:58 PM, David Hunt wrote:
>
>> --- a/lib/librte_mempool/rte_mempool.h
>> +++ b/lib/librte_mempool/rte_mempool.h
>> +/**
>> + * Prototype for implementation specific data provisioning function.
>> + *
>> + * The function should provide the implementation specific memory for
>> + * use by the other mempool ops functions in a given mempool ops struct.
>> + * E.g. the default ops provides an instance of the rte_ring for this purpose.
>> + * it will mostlikely point to a different type of data structure, and
>> + * will be transparent to the application programmer.
>> + */
>> +typedef void *(*rte_mempool_alloc_t)(struct rte_mempool *mp);
> In http://dpdk.org/ml/archives/dev/2016-June/040233.html, I suggested
> changing the prototype to return an int (-errno) and directly set
> mp->pool_data (void *) or mp->pool_if (uint64_t). No cast would be
> required in the latter case.

Done.

> By the way, there is a typo in the comment:
> "mostlikely" -> "most likely"

Fixed.

>> --- /dev/null
>> +++ b/lib/librte_mempool/rte_mempool_default.c
>> +static void
>> +common_ring_free(struct rte_mempool *mp)
>> +{
>> +	rte_ring_free((struct rte_ring *)mp->pool_data);
>> +}
> I don't think the cast is needed here.
> (same in other functions)

Removed.

>
>> --- /dev/null
>> +++ b/lib/librte_mempool/rte_mempool_ops.c
>> +/* add a new ops struct in rte_mempool_ops_table, return its index */
>> +int
>> +rte_mempool_ops_register(const struct rte_mempool_ops *h)
>> +{
>> +	struct rte_mempool_ops *ops;
>> +	int16_t ops_index;
>> +
>> +	rte_spinlock_lock(&rte_mempool_ops_table.sl);
>> +
>> +	if (rte_mempool_ops_table.num_ops >=
>> +			RTE_MEMPOOL_MAX_OPS_IDX) {
>> +		rte_spinlock_unlock(&rte_mempool_ops_table.sl);
>> +		RTE_LOG(ERR, MEMPOOL,
>> +			"Maximum number of mempool ops structs exceeded\n");
>> +		return -ENOSPC;
>> +	}
>> +
>> +	if (h->put == NULL || h->get == NULL || h->get_count == NULL) {
>> +		rte_spinlock_unlock(&rte_mempool_ops_table.sl);
>> +		RTE_LOG(ERR, MEMPOOL,
>> +			"Missing callback while registering mempool ops\n");
>> +		return -EINVAL;
>> +	}
>> +
>> +	if (strlen(h->name) >= sizeof(ops->name) - 1) {
>> +		RTE_LOG(DEBUG, EAL, "%s(): mempool_ops <%s>: name too long\n",
>> +				__func__, h->name);
>> +		rte_errno = EEXIST;
>> +		return NULL;
>> +	}
> This has already been noticed by Shreyansh, but in case of:
>
> rte_mempool_ops.c:75:10: error: return makes integer from pointer
> without a cast [-Werror=int-conversion]
>     return NULL;
>            ^

Changed to return -EEXIST

>
>> +/* sets mempool ops previously registered by rte_mempool_ops_register */
>> +int
>> +rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name)
>
> When I compile with shared libraries enabled, I get the following error:
>
> librte_reorder.so: undefined reference to `rte_mempool_ops_table'
> librte_mbuf.so: undefined reference to `rte_mempool_set_ops_byname'
> ...
>
> The new functions and global variables must be in
> rte_mempool_version.map. This was in v5
> ( http://dpdk.org/ml/archives/dev/2016-May/039365.html ) but
> was dropped in v6.

OK, Added.

>
> Regards,
> Olivier

Thanks,
David.


* Re: [PATCH v8 1/3] mempool: support external mempool operations
  2016-06-09 10:31                         ` Jerin Jacob
@ 2016-06-09 11:06                           ` Hunt, David
  2016-06-09 11:49                           ` Shreyansh Jain
  1 sibling, 0 replies; 238+ messages in thread
From: Hunt, David @ 2016-06-09 11:06 UTC (permalink / raw)
  To: Jerin Jacob; +Cc: Shreyansh Jain, dev, olivier.matz, viktorin



On 9/6/2016 11:31 AM, Jerin Jacob wrote:
> On Thu, Jun 09, 2016 at 10:39:46AM +0100, Hunt, David wrote:
>> Hi Shreyansh,
>>
>> On 8/6/2016 2:48 PM, Shreyansh Jain wrote:
>>> Hi David,
>>>
>>> Thanks for explanation. I have some comments inline...
>>>
>>>> -----Original Message-----
>>>> From: Hunt, David [mailto:david.hunt@intel.com]
>>>> Sent: Tuesday, June 07, 2016 2:56 PM
>>>> To: Shreyansh Jain <shreyansh.jain@nxp.com>; dev@dpdk.org
>>>> Cc: olivier.matz@6wind.com; viktorin@rehivetech.com;
>>>> jerin.jacob@caviumnetworks.com
>>>> Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool
>>>> operations
>>>>
>>>> Hi Shreyansh,
>>>>
>>>> On 6/6/2016 3:38 PM, Shreyansh Jain wrote:
>>>>> Hi,
>>>>>
>>>>> (Apologies for overly-eager email sent on this thread earlier. Will be more
>>>> careful in future).
>>>>> This is more of a question/clarification than a comment. (And I have taken
>>>> only some snippets from original mail to keep it cleaner)
>>>>> <snip>
>>>>>> +MEMPOOL_REGISTER_OPS(ops_mp_mc);
>>>>>> +MEMPOOL_REGISTER_OPS(ops_sp_sc);
>>>>>> +MEMPOOL_REGISTER_OPS(ops_mp_sc);
>>>>>> +MEMPOOL_REGISTER_OPS(ops_sp_mc);
>>>>> <snip>
>>>>>
>>>>>    From the above what I understand is that multiple packet pool handlers can
>>>> be created.
>>>>> I have a use-case where application has multiple pools but only the packet
>>>> pool is hardware backed. Using the hardware for general buffer requirements
>>>> would prove costly.
>>>>>    From what I understand from the patch, selection of the pool is based on
>>>> the flags below.
>>>>
>>>> The flags are only used to select one of the default handlers for
>>>> backward compatibility through the rte_mempool_create call. If you
>>>> wish to use a mempool handler that is not one of the defaults
>>>> (i.e. a new hardware handler), you would use rte_mempool_create_empty
>>>> followed by the rte_mempool_set_ops_byname call.
>>>> So, for the external handlers, you create an empty mempool, then set
>>>> the operations (ops) for that particular mempool.
>>> I am concerned about the existing applications (for example, l3fwd).
>>> Explicit calls to 'rte_create_mempool_empty->rte_mempool_set_ops_byname' model would require modifications to these applications.
>>> Ideally, without any modifications, these applications should be able to use packet pools (backed by hardware) and buffer pools (backed by ring/others) - transparently.
>>>
>>> If I go by your suggestions, what I understand is, doing the above without modification to applications would be equivalent to:
>>>
>>>     struct rte_mempool_ops custom_hw_allocator = {...}
>>>
>>> thereafter, in config/common_base:
>>>
>>>     CONFIG_RTE_DEFAULT_MEMPOOL_OPS="custom_hw_allocator"
>>>
>>> calls to rte_pktmbuf_pool_create would use the new allocator.
>> Yes, correct. But only for calls to rte_pktmbuf_pool_create(). Calls to
>> rte_mempool_create will continue to use the default handlers (ring based).
>>> But, another problem arises here.
>>>
>>> There are two distinct paths for allocations of a memory pool:
>>> 1. A 'pkt' pool:
>>>      rte_pktmbuf_pool_create
>>>        \- rte_mempool_create_empty
>>>        |   \- rte_mempool_set_ops_byname(..ring_mp_mc..)
>>>        |
>>>        `- rte_mempool_set_ops_byname
>>>              (...RTE_MBUF_DEFAULT_MEMPOOL_OPS..)
>>>              /* Override default 'ring_mp_mc' of
>>>               * rte_mempool_create */
>>>
>>> 2. Through generic mempool create API
>>>      rte_mempool_create
>>>        \- rte_mempool_create_empty
>>>              (passing pktmbuf and pool constructors)
>>> I found various instances in example applications where rte_mempool_create() is being called directly for packet pools - bypassing the more semantically correct call to rte_pktmbuf_* for packet pools.
>>>
>>> In (2) control path, RTE_MBUF_DEFAULT_MEMPOOLS_OPS wouldn't be able to replace custom handler operations for packet buffer allocations.
>>>
>>>   From a performance point-of-view, Applications should be able to select between packet pools and non-packet pools.
>> This is intended for backward compatibility and API consistency. Any
>> applications that use rte_mempool_create directly will continue to use the
>> default mempool handlers. If they need to use a custom handler, they will
>> need to be modified to call the newer APIs, rte_mempool_create_empty and
>> rte_mempool_set_ops_byname.
>>
>>
>>>>> <snip>
>>>>>> +	/*
>>>>>> +	 * Since we have 4 combinations of the SP/SC/MP/MC examine the flags to
>>>>>> +	 * set the correct index into the table of ops structs.
>>>>>> +	 */
>>>>>> +	if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
>>>>>> +		rte_mempool_set_ops_byname(mp, "ring_sp_sc");
>>>>>> +	else if (flags & MEMPOOL_F_SP_PUT)
>>>>>> +		rte_mempool_set_ops_byname(mp, "ring_sp_mc");
>>>>>> +	else if (flags & MEMPOOL_F_SC_GET)
>>>>>> +		rte_mempool_set_ops_byname(mp, "ring_mp_sc");
>>>>>> +	else
>>>>>> +		rte_mempool_set_ops_byname(mp, "ring_mp_mc");
>>>>>> +
>>> My suggestion is to have an additional flag, 'MEMPOOL_F_PKT_ALLOC', which, if specified, would:
>>>
>>> ...
>>> #define MEMPOOL_F_SC_GET    0x0008
>>> #define MEMPOOL_F_PKT_ALLOC 0x0010
>>> ...
>>>
>>> in rte_mempool_create_empty:
>>>      ... after checking the other MEMPOOL_F_* flags...
>>>
>>>       if (flags & MEMPOOL_F_PKT_ALLOC)
>>>           rte_mempool_set_ops_byname(mp, RTE_MBUF_DEFAULT_MEMPOOL_OPS)
>>>
>>> And removing the redundant call to rte_mempool_set_ops_byname() in rte_pktmbuf_create_pool().
>>>
>>> Thereafter, rte_pktmbuf_pool_create can be changed to:
>>>
>>>         ...
>>>       mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
>>> -        sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
>>> +        sizeof(struct rte_pktmbuf_pool_private), socket_id,
>>> +        MEMPOOL_F_PKT_ALLOC);
>>>       if (mp == NULL)
>>>           return NULL;
>> Yes, this would somewhat simplify the creation of a pktmbuf pool, in that
>> it replaces the rte_mempool_set_ops_byname call with a flag bit. However,
>> I'm not sure we want to introduce a third method of creating a mempool to
>> developers. If we introduced this, we would then have:
>> 1. rte_pktmbuf_pool_create()
>> 2. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC set (which would
>>     use the configured custom handler)
>> 3. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC __not__ set,
>>     followed by a call to rte_mempool_set_ops_byname() (which would allow
>>     several different custom handlers to be used in one application)
>>
>> Does anyone else have an opinion on this? Oliver, Jerin, Jan?
> As I mentioned earlier, my take is not to create separate APIs for
> external mempool handlers. In my view, it's the same thing, just a
> separate mempool handler selected through function pointers.
>
> To keep backward compatibility, I think we can extend the flags in
> rte_mempool_create and have a single API for external/internal pool
> creation (this also makes it easy for existing applications: just add a
> mempool-handler command line argument to choose the handler).
>
> Jerin

Would a good compromise be what Shreyansh is suggesting: adding just one
bit to the flags to allow rte_mempool_create to use the mempool handler
defined in CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS?

That way we would avoid the complexity of the previously discussed
proposal of parsing sections of the flags bits into bytes to get the
requested handler index, and we wouldn't have to query by name to get the
index of the handler we're interested in. It's just a simple "use the
configured custom handler or not" bit.

This way, applications can set the bit to use the custom handler defined
in the config, but still have the flexibility of using the other new API
calls to have several handlers in use in one application.

Regards,
David.


* Re: [PATCH v8 1/3] mempool: support external mempool operations
  2016-06-09  9:39                       ` Hunt, David
  2016-06-09 10:31                         ` Jerin Jacob
@ 2016-06-09 11:41                         ` Shreyansh Jain
  2016-06-09 12:55                           ` Hunt, David
  2016-06-09 13:09                         ` Jan Viktorin
  2 siblings, 1 reply; 238+ messages in thread
From: Shreyansh Jain @ 2016-06-09 11:41 UTC (permalink / raw)
  To: Hunt, David, dev; +Cc: olivier.matz, viktorin, jerin.jacob

Hi David,

> -----Original Message-----
> From: Hunt, David [mailto:david.hunt@intel.com]
> Sent: Thursday, June 09, 2016 3:10 PM
> To: Shreyansh Jain <shreyansh.jain@nxp.com>; dev@dpdk.org
> Cc: olivier.matz@6wind.com; viktorin@rehivetech.com;
> jerin.jacob@caviumnetworks.com
> Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool
> operations
> 
> Hi Shreyansh,
> 
> On 8/6/2016 2:48 PM, Shreyansh Jain wrote:
> > Hi David,
> >
> > Thanks for explanation. I have some comments inline...
> >
> >> -----Original Message-----
> >> From: Hunt, David [mailto:david.hunt@intel.com]
> >> Sent: Tuesday, June 07, 2016 2:56 PM
> >> To: Shreyansh Jain <shreyansh.jain@nxp.com>; dev@dpdk.org
> >> Cc: olivier.matz@6wind.com; viktorin@rehivetech.com;
> >> jerin.jacob@caviumnetworks.com
> >> Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool
> >> operations
> >>
> >> Hi Shreyansh,
> >>
> >> On 6/6/2016 3:38 PM, Shreyansh Jain wrote:
> >>> Hi,
> >>>
> >>> (Apologies for overly-eager email sent on this thread earlier. Will be
> more
> >> careful in future).
> >>> This is more of a question/clarification than a comment. (And I have
> taken
> >> only some snippets from original mail to keep it cleaner)
> >>> <snip>
> >>>> +MEMPOOL_REGISTER_OPS(ops_mp_mc);
> >>>> +MEMPOOL_REGISTER_OPS(ops_sp_sc);
> >>>> +MEMPOOL_REGISTER_OPS(ops_mp_sc);
> >>>> +MEMPOOL_REGISTER_OPS(ops_sp_mc);
> >>> <snip>
> >>>
> >>>   From the above what I understand is that multiple packet pool handlers
> can
> >> be created.
> >>> I have a use-case where application has multiple pools but only the
> packet
> >> pool is hardware backed. Using the hardware for general buffer
> requirements
> >> would prove costly.
> >>>   From what I understand from the patch, selection of the pool is based
> on
> >> the flags below.
> >>
> >> The flags are only used to select one of the default handlers for
> >> backward compatibility through the rte_mempool_create call. If you
> >> wish to use a mempool handler that is not one of the defaults
> >> (i.e. a new hardware handler), you would use rte_mempool_create_empty
> >> followed by the rte_mempool_set_ops_byname call.
> >> So, for the external handlers, you create an empty mempool, then set
> >> the operations (ops) for that particular mempool.
> > I am concerned about the existing applications (for example, l3fwd).
> > Explicit calls to 'rte_create_mempool_empty->rte_mempool_set_ops_byname'
> model would require modifications to these applications.
> > Ideally, without any modifications, these applications should be able to
> use packet pools (backed by hardware) and buffer pools (backed by
> ring/others) - transparently.
> >
> > If I go by your suggestions, what I understand is, doing the above without
> modification to applications would be equivalent to:
> >
> >    struct rte_mempool_ops custom_hw_allocator = {...}
> >
> > thereafter, in config/common_base:
> >
> >    CONFIG_RTE_DEFAULT_MEMPOOL_OPS="custom_hw_allocator"
> >
> > calls to rte_pktmbuf_pool_create would use the new allocator.
> 
> Yes, correct. But only for calls to rte_pktmbuf_pool_create(). Calls to
> rte_mempool_create will continue to use the default handlers (ring based).

Agree with you.
But some applications continue to use rte_mempool_create for allocating packet pools. Thus, even with a custom handler available (which, most probably, would be a hardware packet buffer handler), the application would unintentionally end up not using it.
Probably, such applications should be changed? (e.g. pipeline)

> > But, another problem arises here.
> >
> > There are two distinct paths for allocations of a memory pool:
> > 1. A 'pkt' pool:
> >     rte_pktmbuf_pool_create
> >       \- rte_mempool_create_empty
> >       |   \- rte_mempool_set_ops_byname(..ring_mp_mc..)
> >       |
> >       `- rte_mempool_set_ops_byname
> >             (...RTE_MBUF_DEFAULT_MEMPOOL_OPS..)
> >             /* Override default 'ring_mp_mc' of
> >              * rte_mempool_create */
> >
> > 2. Through generic mempool create API
> >     rte_mempool_create
> >       \- rte_mempool_create_empty
> >             (passing pktmbuf and pool constructors)
> >
> > I found various instances in example applications where
> rte_mempool_create() is being called directly for packet pools - bypassing
> the more semantically correct call to rte_pktmbuf_* for packet pools.
> >
> > In (2) control path, RTE_MBUF_DEFAULT_MEMPOOLS_OPS wouldn't be able to
> replace custom handler operations for packet buffer allocations.
> >
> >  From a performance point-of-view, Applications should be able to select
> between packet pools and non-packet pools.
> 
> This is intended for backward compatibility and API consistency. Any
> applications that use rte_mempool_create directly will continue to use the
> default mempool handlers. If they need to use a custom handler, they will
> need to be modified to call the newer APIs, rte_mempool_create_empty and
> rte_mempool_set_ops_byname.

My understanding was that applications should be oblivious to how their pools are managed, except that they do understand packet pools should be faster (or accelerated) compared to non-packet pools.
(Of course, some applications may be designed to explicitly take advantage of an available handler through rte_mempool_create_empty => rte_mempool_set_ops_byname calls.)

From that perspective, I was expecting that applications would call:
 -> rte_pktmbuf_* for all packet-related operations
 -> rte_mempool_* for non-packet pools or explicit hardware handlers

And leave the rest of the mempool handler related magic to the DPDK framework.

> 
> 
> >>> <snip>
> >>>> +	/*
> >>>> +	 * Since we have 4 combinations of the SP/SC/MP/MC examine the
> flags to
> >>>> +	 * set the correct index into the table of ops structs.
> >>>> +	 */
> >>>> +	if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
> >>>> +		rte_mempool_set_ops_byname(mp, "ring_sp_sc");
> >>>> +	else if (flags & MEMPOOL_F_SP_PUT)
> >>>> +		rte_mempool_set_ops_byname(mp, "ring_sp_mc");
> >>>> +	else if (flags & MEMPOOL_F_SC_GET)
> >>>> +		rte_mempool_set_ops_byname(mp, "ring_mp_sc");
> >>>> +	else
> >>>> +		rte_mempool_set_ops_byname(mp, "ring_mp_mc");
> >>>> +
> > My suggestion is to have an additional flag, 'MEMPOOL_F_PKT_ALLOC', which,
> if specified, would:

I read through some previous discussions and realized that something similar [1] had already been proposed earlier.
I didn't want to hijack this thread with old discussions - it was unintentional.

[1] http://article.gmane.org/gmane.comp.networking.dpdk.devel/39803

But, [1] would make the distinction of *type* of pool and its corresponding handler, whether default or external/custom, quite clear.

> >
> > ...
> > #define MEMPOOL_F_SC_GET    0x0008
> > #define MEMPOOL_F_PKT_ALLOC 0x0010
> > ...
> >
> > in rte_mempool_create_empty:
> >     ... after checking the other MEMPOOL_F_* flags...
> >
> >      if (flags & MEMPOOL_F_PKT_ALLOC)
> >          rte_mempool_set_ops_byname(mp, RTE_MBUF_DEFAULT_MEMPOOL_OPS)
> >
> > And removing the redundant call to rte_mempool_set_ops_byname() in
> rte_pktmbuf_create_pool().
> >
> > Thereafter, rte_pktmbuf_pool_create can be changed to:
> >
> >        ...
> >      mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
> > -        sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
> > +        sizeof(struct rte_pktmbuf_pool_private), socket_id,
> > +        MEMPOOL_F_PKT_ALLOC);
> >      if (mp == NULL)
> >          return NULL;
> 
> Yes, this would somewhat simplify the creation of a pktmbuf pool, in that
> it replaces the rte_mempool_set_ops_byname call with a flag bit. However,
> I'm not sure we want to introduce a third method of creating a mempool to
> developers. If we introduced this, we would then have:
> 1. rte_pktmbuf_pool_create()
> 2. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC set (which would
>     use the configured custom handler)
> 3. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC __not__ set,
>     followed by a call to rte_mempool_set_ops_byname() (which would allow
>     several different custom handlers to be used in one application)
> 
> Does anyone else have an opinion on this? Oliver, Jerin, Jan?
> 
> Regards,
> Dave.
> 

-
Shreyansh


* Re: [PATCH v8 1/3] mempool: support external mempool operations
  2016-06-09 10:31                         ` Jerin Jacob
  2016-06-09 11:06                           ` Hunt, David
@ 2016-06-09 11:49                           ` Shreyansh Jain
  2016-06-09 12:30                             ` Jerin Jacob
  1 sibling, 1 reply; 238+ messages in thread
From: Shreyansh Jain @ 2016-06-09 11:49 UTC (permalink / raw)
  To: Jerin Jacob, Hunt, David; +Cc: dev, olivier.matz, viktorin

Hi Jerin,

> -----Original Message-----
> From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> Sent: Thursday, June 09, 2016 4:02 PM
> To: Hunt, David <david.hunt@intel.com>
> Cc: Shreyansh Jain <shreyansh.jain@nxp.com>; dev@dpdk.org;
> olivier.matz@6wind.com; viktorin@rehivetech.com
> Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool
> operations
> 
> On Thu, Jun 09, 2016 at 10:39:46AM +0100, Hunt, David wrote:
> > Hi Shreyansh,
> >
> > On 8/6/2016 2:48 PM, Shreyansh Jain wrote:
> > > Hi David,
> > >
> > > Thanks for explanation. I have some comments inline...
> > >
> > > > -----Original Message-----
> > > > From: Hunt, David [mailto:david.hunt@intel.com]
> > > > Sent: Tuesday, June 07, 2016 2:56 PM
> > > > To: Shreyansh Jain <shreyansh.jain@nxp.com>; dev@dpdk.org
> > > > Cc: olivier.matz@6wind.com; viktorin@rehivetech.com;
> > > > jerin.jacob@caviumnetworks.com
> > > > Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external
> mempool
> > > > operations
> > > >
> > > > Hi Shreyansh,
> > > >
> > > > On 6/6/2016 3:38 PM, Shreyansh Jain wrote:
> > > > > Hi,
> > > > >
> > > > > (Apologies for overly-eager email sent on this thread earlier. Will
> be more
> > > > careful in future).
> > > > > This is more of a question/clarification than a comment. (And I have
> taken
> > > > only some snippets from original mail to keep it cleaner)
> > > > > <snip>
> > > > > > +MEMPOOL_REGISTER_OPS(ops_mp_mc);
> > > > > > +MEMPOOL_REGISTER_OPS(ops_sp_sc);
> > > > > > +MEMPOOL_REGISTER_OPS(ops_mp_sc);
> > > > > > +MEMPOOL_REGISTER_OPS(ops_sp_mc);
> > > > > <snip>
> > > > >
> > > > >   From the above what I understand is that multiple packet pool
> handlers can
> > > > be created.
> > > > > I have a use-case where application has multiple pools but only the
> packet
> > > > pool is hardware backed. Using the hardware for general buffer
> requirements
> > > > would prove costly.
> > > > >   From what I understand from the patch, selection of the pool is
> based on
> > > > the flags below.
> > > >
> > > > The flags are only used to select one of the default handlers for
> > > > backward compatibility through the rte_mempool_create call. If you
> > > > wish to use a mempool handler that is not one of the defaults
> > > > (i.e. a new hardware handler), you would use rte_mempool_create_empty
> > > > followed by the rte_mempool_set_ops_byname call.
> > > > So, for the external handlers, you create an empty mempool, then set
> > > > the operations (ops) for that particular mempool.
> > > I am concerned about the existing applications (for example, l3fwd).
> > > Explicit calls to 'rte_create_mempool_empty->rte_mempool_set_ops_byname'
> model would require modifications to these applications.
> > > Ideally, without any modifications, these applications should be able to
> use packet pools (backed by hardware) and buffer pools (backed by
> ring/others) - transparently.
> > >
> > > If I go by your suggestions, what I understand is, doing the above
> without modification to applications would be equivalent to:
> > >
> > >    struct rte_mempool_ops custom_hw_allocator = {...}
> > >
> > > thereafter, in config/common_base:
> > >
> > >    CONFIG_RTE_DEFAULT_MEMPOOL_OPS="custom_hw_allocator"
> > >
> > > calls to rte_pktmbuf_pool_create would use the new allocator.
> >
> > Yes, correct. But only for calls to rte_pktmbuf_pool_create(). Calls to
> > rte_mempool_create will continue to use the default handlers (ring based).
> > > But, another problem arises here.
> > >
> > > There are two distinct paths for allocations of a memory pool:
> > > 1. A 'pkt' pool:
> > >     rte_pktmbuf_pool_create
> > >       \- rte_mempool_create_empty
> > >       |   \- rte_mempool_set_ops_byname(..ring_mp_mc..)
> > >       |
> > >       `- rte_mempool_set_ops_byname
> > >             (...RTE_MBUF_DEFAULT_MEMPOOL_OPS..)
> > >             /* Override default 'ring_mp_mc' of
> > >              * rte_mempool_create */
> > >
> > > 2. Through generic mempool create API
> > >     rte_mempool_create
> > >       \- rte_mempool_create_empty
> > >             (passing pktmbuf and pool constructors)
> > > I found various instances in example applications where
> rte_mempool_create() is being called directly for packet pools - bypassing
> the more semantically correct call to rte_pktmbuf_* for packet pools.
> > >
> > > In (2) control path, RTE_MBUF_DEFAULT_MEMPOOLS_OPS wouldn't be able to
> replace custom handler operations for packet buffer allocations.
> > >
> > >  From a performance point-of-view, Applications should be able to select
> between packet pools and non-packet pools.
> >
> > This is intended for backward compatibility and API consistency. Any
> > applications that use rte_mempool_create directly will continue to use the
> > default mempool handlers. If they need to use a custom handler, they will
> > need to be modified to call the newer APIs, rte_mempool_create_empty and
> > rte_mempool_set_ops_byname.
> >
> >
> > > > > <snip>
> > > > > > +	/*
> > > > > > +	 * Since we have 4 combinations of the SP/SC/MP/MC examine the
> flags to
> > > > > > +	 * set the correct index into the table of ops structs.
> > > > > > +	 */
> > > > > > +	if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
> > > > > > +		rte_mempool_set_ops_byname(mp, "ring_sp_sc");
> > > > > > +	else if (flags & MEMPOOL_F_SP_PUT)
> > > > > > +		rte_mempool_set_ops_byname(mp, "ring_sp_mc");
> > > > > > +	else if (flags & MEMPOOL_F_SC_GET)
> > > > > > +		rte_mempool_set_ops_byname(mp, "ring_mp_sc");
> > > > > > +	else
> > > > > > +		rte_mempool_set_ops_byname(mp, "ring_mp_mc");
> > > > > > +
> > > My suggestion is to have an additional flag, 'MEMPOOL_F_PKT_ALLOC',
> which, if specified, would:
> > >
> > > ...
> > > #define MEMPOOL_F_SC_GET    0x0008
> > > #define MEMPOOL_F_PKT_ALLOC 0x0010
> > > ...
> > >
> > > in rte_mempool_create_empty:
> > >     ... after checking the other MEMPOOL_F_* flags...
> > >
> > >      if (flags & MEMPOOL_F_PKT_ALLOC)
> > >          rte_mempool_set_ops_byname(mp, RTE_MBUF_DEFAULT_MEMPOOL_OPS)
> > >
> > > And removing the redundant call to rte_mempool_set_ops_byname() in
> rte_pktmbuf_create_pool().
> > >
> > > Thereafter, rte_pktmbuf_pool_create can be changed to:
> > >
> > >        ...
> > >      mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
> > > -        sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
> > > +        sizeof(struct rte_pktmbuf_pool_private), socket_id,
> > > +        MEMPOOL_F_PKT_ALLOC);
> > >      if (mp == NULL)
> > >          return NULL;
> >
> > Yes, this would simplify somewhat the creation of a pktmbuf pool, in that
> it
> > replaces
> > the rte_mempool_set_ops_byname with a flag bit. However, I'm not sure we
> > want
> > to introduce a third method of creating a mempool to the developers. If we
> > introduced this, we would then have:
> > 1. rte_pktmbuf_pool_create()
> > 2. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC set (which would
> >    use the configured custom handler)
> > 3. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC __not__ set followed
> >    by a call to rte_mempool_set_ops_byname() (would allow several different
> > custom
> >    handlers to be used in one application
> >
> > Does anyone else have an opinion on this? Oliver, Jerin, Jan?
> 
> As I mentioned earlier, my take is not to create separate APIs for
> external mempool handlers. In my view, it's the same, just a separate
> mempool handler selected through function pointers.
> 
> To keep backward compatibility, I think we can extend the flags
> in rte_mempool_create and have a single API for external/internal pool
> creation (this makes it easy for existing applications too: just add a
> mempool-handler command-line argument to existing applications to choose
> the mempool handler)

Maybe I am interpreting it wrong, but are you suggesting a single mempool handler for all buffer/packet needs of an application (passed as a command-line argument)?
That would be inefficient, especially for cases where a pool is backed by hardware. The application wouldn't want its generic buffers to consume hardware resources which would be better used for packets.

I was hoping that the external mempool handler would help segregate such use-cases and allow applications to tap into accelerated packet processors.

Do correct me if my understanding is wrong.

> 
> Jerin
> 
> >
> > Regards,
> > Dave.
> >
> >

-
Shreyansh

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v8 1/3] mempool: support external mempool operations
  2016-06-09 11:49                           ` Shreyansh Jain
@ 2016-06-09 12:30                             ` Jerin Jacob
  2016-06-09 13:03                               ` Shreyansh Jain
  2016-06-09 13:18                               ` Hunt, David
  0 siblings, 2 replies; 238+ messages in thread
From: Jerin Jacob @ 2016-06-09 12:30 UTC (permalink / raw)
  To: Shreyansh Jain; +Cc: Hunt, David, dev, olivier.matz, viktorin

On Thu, Jun 09, 2016 at 11:49:44AM +0000, Shreyansh Jain wrote:
> Hi Jerin,

Hi Shreyansh,

> 
> > > Yes, this would simplify somewhat the creation of a pktmbuf pool, in that
> > it
> > > replaces
> > > the rte_mempool_set_ops_byname with a flag bit. However, I'm not sure we
> > > want
> > > to introduce a third method of creating a mempool to the developers. If we
> > > introduced this, we would then have:
> > > 1. rte_pktmbuf_pool_create()
> > > 2. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC set (which would
> > >    use the configured custom handler)
> > > 3. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC __not__ set followed
> > >    by a call to rte_mempool_set_ops_byname() (would allow several different
> > > custom
> > >    handlers to be used in one application
> > >
> > > Does anyone else have an opinion on this? Oliver, Jerin, Jan?
> > 
> > As I mentioned earlier, my take is not to create separate APIs for
> > external mempool handlers. In my view, it's the same, just a separate
> > mempool handler selected through function pointers.
> > 
> > To keep backward compatibility, I think we can extend the flags
> > in rte_mempool_create and have a single API for external/internal pool
> > creation (this makes it easy for existing applications too: just add a
> > mempool-handler command-line argument to existing applications to choose
> > the mempool handler)
> 
> Maybe I am interpreting it wrong, but are you suggesting a single mempool handler for all buffer/packet needs of an application (passed as a command-line argument)?
> That would be inefficient, especially for cases where a pool is backed by hardware. The application wouldn't want its generic buffers to consume hardware resources which would be better used for packets.

It may vary from platform to platform or with the particular use case. For
instance, the HW external pool manager for generic buffers may scale better
than the SW multi-producer/multi-consumer implementation when the number of
cores > N, as no locking is involved in enqueue/dequeue (though again, this
depends on the specific HW implementation).

I thought there was no harm in selecting the external pool handlers at the
root level itself (rte_mempool_create), as by default it is SW MP/MC and it
is just an option to override if the application wants it.

Jerin


> 
> I was hoping that external mempool handler would help segregate such use-cases and allow applications to tap into accelerated packet processors.
> 
> Do correct me if I my understanding is wrong.
> 
> > 
> > Jerin
> > 
> > >
> > > Regards,
> > > Dave.
> > >
> > >
> 
> -
> Shreyansh

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v8 1/3] mempool: support external mempool operations
  2016-06-09 11:41                         ` Shreyansh Jain
@ 2016-06-09 12:55                           ` Hunt, David
  0 siblings, 0 replies; 238+ messages in thread
From: Hunt, David @ 2016-06-09 12:55 UTC (permalink / raw)
  To: Shreyansh Jain, dev; +Cc: olivier.matz, viktorin, jerin.jacob



On 9/6/2016 12:41 PM, Shreyansh Jain wrote:
> Hi David,
>
>> -----Original Message-----
>> From: Hunt, David [mailto:david.hunt@intel.com]
>> Sent: Thursday, June 09, 2016 3:10 PM
>> To: Shreyansh Jain <shreyansh.jain@nxp.com>; dev@dpdk.org
>> Cc: olivier.matz@6wind.com; viktorin@rehivetech.com;
>> jerin.jacob@caviumnetworks.com
>> Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool
>> operations
>>
>> Hi Shreyansh,
>>
>> On 8/6/2016 2:48 PM, Shreyansh Jain wrote:
>>> Hi David,
>>>
>>> Thanks for explanation. I have some comments inline...
>>>
>>>> -----Original Message-----
>>>> From: Hunt, David [mailto:david.hunt@intel.com]
>>>> Sent: Tuesday, June 07, 2016 2:56 PM
>>>> To: Shreyansh Jain <shreyansh.jain@nxp.com>; dev@dpdk.org
>>>> Cc: olivier.matz@6wind.com; viktorin@rehivetech.com;
>>>> jerin.jacob@caviumnetworks.com
>>>> Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool
>>>> operations
>>>>
>>>> Hi Shreyansh,
>>>>
>>>> On 6/6/2016 3:38 PM, Shreyansh Jain wrote:
>>>>> Hi,
>>>>>
>>>>> (Apologies for overly-eager email sent on this thread earlier. Will be
>> more
>>>> careful in future).
>>>>> This is more of a question/clarification than a comment. (And I have
>> taken
>>>> only some snippets from original mail to keep it cleaner)
>>>>> <snip>
>>>>>> +MEMPOOL_REGISTER_OPS(ops_mp_mc);
>>>>>> +MEMPOOL_REGISTER_OPS(ops_sp_sc);
>>>>>> +MEMPOOL_REGISTER_OPS(ops_mp_sc);
>>>>>> +MEMPOOL_REGISTER_OPS(ops_sp_mc);
>>>>> <snip>
>>>>>
>>>>>    From the above what I understand is that multiple packet pool handlers
>> can
>>>> be created.
>>>>> I have a use-case where application has multiple pools but only the
>> packet
>>>> pool is hardware backed. Using the hardware for general buffer
>> requirements
>>>> would prove costly.
>>>>>    From what I understand from the patch, selection of the pool is based
>> on
>>>> the flags below.
>>>>
>>>> The flags are only used to select one of the default handlers for
>>>> backward compatibility through
>>>> the rte_mempool_create call. If you wish to use a mempool handler that
>>>> is not one of the
>>>> defaults, (i.e. a new hardware handler), you would use the
>>>> rte_create_mempool_empty
>>>> followed by the rte_mempool_set_ops_byname call.
>>>> So, for the external handlers, you create and empty mempool, then set
>>>> the operations (ops)
>>>> for that particular mempool.
>>> I am concerned about the existing applications (for example, l3fwd).
>>> Explicit calls to 'rte_create_mempool_empty->rte_mempool_set_ops_byname'
>> model would require modifications to these applications.
>>> Ideally, without any modifications, these applications should be able to use packet pools (backed by hardware) and buffer pools (backed by
>>> ring/others) - transparently.
>>> If I go by your suggestions, what I understand is, doing the above without modification to applications would be equivalent to:
>>>     struct rte_mempool_ops custom_hw_allocator = {...}
>>>
>>> thereafter, in config/common_base:
>>>
>>>     CONFIG_RTE_DEFAULT_MEMPOOL_OPS="custom_hw_allocator"
>>>
>>> calls to rte_pktmbuf_pool_create would use the new allocator.
>> Yes, correct. But only for calls to rte_pktmbuf_pool_create(). Calls to
>> rte_mempool_create will continue to use the default handlers (ring based).
> Agree with you.
> But some applications continue to use rte_mempool_create for allocating packet pools. Thus, even with a custom handler available (which, most probably, would be a hardware packet buffer handler), the application would unintentionally end up not using it.
> Probably, such applications should be changed? (e.g. pipeline).

Yes, agreed.  If those applications need to use external mempool 
handlers, then they should be changed. I would see that as outside the 
scope of this patchset.


>>> But, another problem arises here.
>>>
>>> There are two distinct paths for allocations of a memory pool:
>>> 1. A 'pkt' pool:
>>>      rte_pktmbuf_pool_create
>>>        \- rte_mempool_create_empty
>>>        |   \- rte_mempool_set_ops_byname(..ring_mp_mc..)
>>>        |
>>>        `- rte_mempool_set_ops_byname
>>>              (...RTE_MBUF_DEFAULT_MEMPOOL_OPS..)
>>>              /* Override default 'ring_mp_mc' of
>>>               * rte_mempool_create */
>>>
>>> 2. Through generic mempool create API
>>>      rte_mempool_create
>>>        \- rte_mempool_create_empty
>>>              (passing pktmbuf and pool constructors)
>>>
>>> I found various instances in example applications where
>> rte_mempool_create() is being called directly for packet pools - bypassing
>> the more semantically correct call to rte_pktmbuf_* for packet pools.
>>> In (2) control path, RTE_MBUF_DEFAULT_MEMPOOLS_OPS wouldn't be able to
>> replace custom handler operations for packet buffer allocations.
>>>   From a performance point-of-view, Applications should be able to select
>> between packet pools and non-packet pools.
>>
>> This is intended for backward compatibility, and API consistency. Any
>> applications that use
>> rte_mempool_create directly will continue to use the default mempool
>> handlers. If they need
>> to use a custom handler, they will need to be modified to call the newer
>> APIs,
>> rte_mempool_create_empty and rte_mempool_set_ops_byname.
> My understanding was that applications should be oblivious of how their pools are managed, except that they do understand packet pools should be faster (or accelerated) than non-packet pools.
> (Of course, some applications may be designed to explicitly take advantage of an available handler through rte_mempool_create_empty=>rte_mempool_set_ops_byname calls.)

> In that perspective, I was expecting that applications should be calling:
>   -> rte_pktmbuf_* for all packet relation operations
>   -> rte_mempool_* for non-packet or explicit hardware handlers
>
> And leave rest of the mempool handler related magic to DPDK framework.

I think there is still some work needed on the application side to know 
whether to use the external or the internal (default) handler for
particular mempools.

I'll propose something based on the further comments on the other part 
of this thread based on feedback so far.

Regards,
Dave.
>
>>
>>>>> <snip>
>>>>>> +	/*
>>>>>> +	 * Since we have 4 combinations of the SP/SC/MP/MC examine the
>> flags to
>>>>>> +	 * set the correct index into the table of ops structs.
>>>>>> +	 */
>>>>>> +	if ((flags & MEMPOOL_F_SP_PUT) && (flags & MEMPOOL_F_SC_GET))
>>>>>> +		rte_mempool_set_ops_byname(mp, "ring_sp_sc");
>>>>>> +	else if (flags & MEMPOOL_F_SP_PUT)
>>>>>> +		rte_mempool_set_ops_byname(mp, "ring_sp_mc");
>>>>>> +	else if (flags & MEMPOOL_F_SC_GET)
>>>>>> +		rte_mempool_set_ops_byname(mp, "ring_mp_sc");
>>>>>> +	else
>>>>>> +		rte_mempool_set_ops_byname(mp, "ring_mp_mc");
>>>>>> +
>>> My suggestion is to have an additional flag, 'MEMPOOL_F_PKT_ALLOC', which,
>> if specified, would:
> I read through some previous discussions and realized that something similar [1] had already been proposed earlier.
> I didn't want to hijack this thread with an old discussion - it was unintentional.
>
> [1] http://article.gmane.org/gmane.comp.networking.dpdk.devel/39803
>
> But, [1] would make the distinction of *type* of pool and its corresponding handler, whether default or external/custom, quite clear.

I can incorporate a bit in the flags (MEMPOOL_F_PKT_ALLOC) as you 
suggest, which would allow the rte_mempool_create calls to use a custom 
handler.

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v8 1/3] mempool: support external mempool operations
  2016-06-09 12:30                             ` Jerin Jacob
@ 2016-06-09 13:03                               ` Shreyansh Jain
  2016-06-09 13:18                               ` Hunt, David
  1 sibling, 0 replies; 238+ messages in thread
From: Shreyansh Jain @ 2016-06-09 13:03 UTC (permalink / raw)
  To: Jerin Jacob; +Cc: Hunt, David, dev, olivier.matz, viktorin

Hi Jerin,

> -----Original Message-----
> From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> Sent: Thursday, June 09, 2016 6:01 PM
> To: Shreyansh Jain <shreyansh.jain@nxp.com>
> Cc: Hunt, David <david.hunt@intel.com>; dev@dpdk.org; olivier.matz@6wind.com;
> viktorin@rehivetech.com
> Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool
> operations
> 
> On Thu, Jun 09, 2016 at 11:49:44AM +0000, Shreyansh Jain wrote:
> > Hi Jerin,
> 
> Hi Shreyansh,
> 
> >
> > > > Yes, this would simplify somewhat the creation of a pktmbuf pool, in
> that
> > > it
> > > > replaces
> > > > the rte_mempool_set_ops_byname with a flag bit. However, I'm not sure
> we
> > > > want
> > > > to introduce a third method of creating a mempool to the developers. If
> we
> > > > introduced this, we would then have:
> > > > 1. rte_pktmbuf_pool_create()
> > > > 2. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC set (which would
> > > >    use the configured custom handler)
> > > > 3. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC __not__ set
> followed
> > > >    by a call to rte_mempool_set_ops_byname() (would allow several
> different
> > > > custom
> > > >    handlers to be used in one application
> > > >
> > > > Does anyone else have an opinion on this? Oliver, Jerin, Jan?
> > >
> > > As I mentioned earlier, my take is not to create separate APIs for
> > > external mempool handlers. In my view, it's the same, just a separate
> > > mempool handler selected through function pointers.
> > >
> > > To keep backward compatibility, I think we can extend the flags
> > > in rte_mempool_create and have a single API for external/internal pool
> > > creation (this makes it easy for existing applications too: just add a
> > > mempool-handler command-line argument to existing applications to choose
> > > the mempool handler)
> >
> > Maybe I am interpreting it wrong, but are you suggesting a single mempool
> handler for all buffer/packet needs of an application (passed as a command-line
> argument)?
> > That would be inefficient, especially for cases where a pool is backed by
> hardware. The application wouldn't want its generic buffers to consume
> hardware resources which would be better used for packets.
> 
> It may vary from platform to platform or with the particular use case. For
> instance, the HW external pool manager for generic buffers may scale better
> than the SW multi-producer/multi-consumer implementation when the number of
> cores > N, as no locking is involved in enqueue/dequeue (though again, this
> depends on the specific HW implementation).

I agree with you that above cases would exist.

But even in these cases, I think it would be the application's prerogative to decide whether it would like its buffers to be managed by a hardware allocator or by the SW [SM]P/[SM]C implementations. Probably, in this case, the application would call rte_mempool_*(PKT_POOL) for generic buffers as well (or maybe a dedicated buffer-pool flag) - just as an example.

> 
> I thought there was no harm in selecting the external pool handlers
> at the root level itself (rte_mempool_create), as by default it is
> SW MP/MC and it is just an option to override if the application wants it.

It sounds fine if calls to rte_mempool_* can select an external handler *optionally* - but if we pass it on the command line, it would be binding (at least semantically) for rte_pktmbuf_* calls as well, wouldn't it?

[Probably, I am still unclear how it would remain 'optional' in the command-line case you suggested.]

> 
> Jerin
> 
> 
[...]

-
Shreyansh

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v8 1/3] mempool: support external mempool operations
  2016-06-09  9:39                       ` Hunt, David
  2016-06-09 10:31                         ` Jerin Jacob
  2016-06-09 11:41                         ` Shreyansh Jain
@ 2016-06-09 13:09                         ` Jan Viktorin
  2016-06-10  7:29                           ` Olivier Matz
  2 siblings, 1 reply; 238+ messages in thread
From: Jan Viktorin @ 2016-06-09 13:09 UTC (permalink / raw)
  To: Hunt, David; +Cc: Shreyansh Jain, dev, olivier.matz, jerin.jacob

On Thu, 9 Jun 2016 10:39:46 +0100
"Hunt, David" <david.hunt@intel.com> wrote:

> Hi Shreyansh,
> 
> On 8/6/2016 2:48 PM, Shreyansh Jain wrote:
> > Hi David,
> >
> > Thanks for explanation. I have some comments inline...
> >  
> >> -----Original Message-----
> >> From: Hunt, David [mailto:david.hunt@intel.com]
> >> Sent: Tuesday, June 07, 2016 2:56 PM
> >> To: Shreyansh Jain <shreyansh.jain@nxp.com>; dev@dpdk.org
> >> Cc: olivier.matz@6wind.com; viktorin@rehivetech.com;
> >> jerin.jacob@caviumnetworks.com
> >> Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool
> >> operations
> >>
> >> Hi Shreyansh,
> >>
> >> On 6/6/2016 3:38 PM, Shreyansh Jain wrote:  
> >>> Hi,
> >>>
> >>> (Apologies for overly-eager email sent on this thread earlier. Will be more  
> >> careful in future).  
> >>> This is more of a question/clarification than a comment. (And I have taken  
> >> only some snippets from original mail to keep it cleaner)  
> >>> <snip>  
> >>>> +MEMPOOL_REGISTER_OPS(ops_mp_mc);
> >>>> +MEMPOOL_REGISTER_OPS(ops_sp_sc);
> >>>> +MEMPOOL_REGISTER_OPS(ops_mp_sc);
> >>>> +MEMPOOL_REGISTER_OPS(ops_sp_mc);  
> >>> <snip>
> >>>
> >>>   From the above what I understand is that multiple packet pool handlers can  
> >> be created.  
> >>> I have a use-case where application has multiple pools but only the packet  
> >> pool is hardware backed. Using the hardware for general buffer requirements
> >> would prove costly.  
> >>>   From what I understand from the patch, selection of the pool is based on  
> >> the flags below.
> >>
> >> The flags are only used to select one of the default handlers for
> >> backward compatibility through
> >> the rte_mempool_create call. If you wish to use a mempool handler that
> >> is not one of the
> >> defaults, (i.e. a new hardware handler), you would use the
> >> rte_create_mempool_empty
> >> followed by the rte_mempool_set_ops_byname call.
> >> So, for the external handlers, you create and empty mempool, then set
> >> the operations (ops)
> >> for that particular mempool.  
> > I am concerned about the existing applications (for example, l3fwd).
> > Explicit calls to 'rte_create_mempool_empty->rte_mempool_set_ops_byname' model would require modifications to these applications.
> > Ideally, without any modifications, these applications should be able to use packet pools (backed by hardware) and buffer pools (backed by ring/others) - transparently.
> >
> > If I go by your suggestions, what I understand is, doing the above without modification to applications would be equivalent to:
> >
> >    struct rte_mempool_ops custom_hw_allocator = {...}
> >
> > thereafter, in config/common_base:
> >
> >    CONFIG_RTE_DEFAULT_MEMPOOL_OPS="custom_hw_allocator"
> >
> > calls to rte_pktmbuf_pool_create would use the new allocator.  
> 
> Yes, correct. But only for calls to rte_pktmbuf_pool_create(). Calls to 
> rte_mempool_create will continue to use the default handlers (ring based).
> > But, another problem arises here.
> >
> > There are two distinct paths for allocations of a memory pool:
> > 1. A 'pkt' pool:
> >     rte_pktmbuf_pool_create
> >       \- rte_mempool_create_empty
> >       |   \- rte_mempool_set_ops_byname(..ring_mp_mc..)
> >       |
> >       `- rte_mempool_set_ops_byname
> >             (...RTE_MBUF_DEFAULT_MEMPOOL_OPS..)
> >             /* Override default 'ring_mp_mc' of
> >              * rte_mempool_create */
> >
> > 2. Through generic mempool create API
> >     rte_mempool_create
> >       \- rte_mempool_create_empty
> >             (passing pktmbuf and pool constructors)
> >    
> > I found various instances in example applications where rte_mempool_create() is being called directly for packet pools - bypassing the more semantically correct call to rte_pktmbuf_* for packet pools.
> >
> > In (2) control path, RTE_MBUF_DEFAULT_MEMPOOLS_OPS wouldn't be able to replace custom handler operations for packet buffer allocations.
> >
> >  From a performance point-of-view, Applications should be able to select between packet pools and non-packet pools.  
> 
> This is intended for backward compatibility, and API consistency. Any 
> applications that use
> rte_mempool_create directly will continue to use the default mempool 
> handlers. If they need
> to use a custom handler, they will need to be modified to call the newer 
> APIs,
> rte_mempool_create_empty and rte_mempool_set_ops_byname.
> 
> 
> >>> <snip>  
> >>>> +	/*
> >>>> +	 * Since we have 4 combinations of the SP/SC/MP/MC examine the flags to
> >>>> +	 * set the correct index into the table of ops structs.
> >>>> +	 */
> >>>> +	if ((flags & MEMPOOL_F_SP_PUT) && (flags & MEMPOOL_F_SC_GET))
> >>>> +		rte_mempool_set_ops_byname(mp, "ring_sp_sc");
> >>>> +	else if (flags & MEMPOOL_F_SP_PUT)
> >>>> +		rte_mempool_set_ops_byname(mp, "ring_sp_mc");
> >>>> +	else if (flags & MEMPOOL_F_SC_GET)
> >>>> +		rte_mempool_set_ops_byname(mp, "ring_mp_sc");
> >>>> +	else
> >>>> +		rte_mempool_set_ops_byname(mp, "ring_mp_mc");
> >>>> +  
> > My suggestion is to have an additional flag, 'MEMPOOL_F_PKT_ALLOC', which, if specified, would:
> >
> > ...
> > #define MEMPOOL_F_SC_GET    0x0008
> > #define MEMPOOL_F_PKT_ALLOC 0x0010
> > ...
> >
> > in rte_mempool_create_empty:
> >     ... after checking the other MEMPOOL_F_* flags...
> >
> >      if (flags & MEMPOOL_F_PKT_ALLOC)
> >          rte_mempool_set_ops_byname(mp, RTE_MBUF_DEFAULT_MEMPOOL_OPS)
> >
> > And removing the redundant call to rte_mempool_set_ops_byname() in rte_pktmbuf_create_pool().
> >
> > Thereafter, rte_pktmbuf_pool_create can be changed to:
> >
> >        ...
> >      mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
> > -        sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
> > +        sizeof(struct rte_pktmbuf_pool_private), socket_id,
> > +        MEMPOOL_F_PKT_ALLOC);
> >      if (mp == NULL)
> >          return NULL;  
> 
> Yes, this would simplify somewhat the creation of a pktmbuf pool, in 
> that it replaces
> the rte_mempool_set_ops_byname with a flag bit. However, I'm not sure we 
> want
> to introduce a third method of creating a mempool to the developers. If we
> introduced this, we would then have:
> 1. rte_pktmbuf_pool_create()
> 2. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC set (which would
>     use the configured custom handler)
> 3. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC __not__ set followed
>     by a call to rte_mempool_set_ops_byname() (would allow several 
> different custom
>     handlers to be used in one application
> 
> Does anyone else have an opinion on this? Oliver, Jerin, Jan?

I am quite careful about this topic, as I don't feel very involved in all the
use cases. My opinion is that the _new API_ should be able to cover all cases and
the _old API_ should be backwards compatible, however, built on top of the _new API_.

I.e. I think the flags MEMPOOL_F_SP_PUT, MEMPOOL_F_SC_GET (relics of the old API)
should be accepted by the old API ONLY. The rte_mempool_create_empty should not process
them. Similarly for a potential MEMPOOL_F_PKT_ALLOC, I would not pollute
rte_mempool_create_empty with this anymore.

Overall, we would get exactly two approaches (and not more):

* using rte_mempool_create with flags calling the rte_mempool_create_empty and
  rte_mempool_set_ops_byname internally (so this layer can be marked as deprecated
  and removed in the future)

* using rte_mempool_create_empty + rte_mempool_set_ops_byname - allowing any customizations
  but with the necessity to change the applications (new preferred API)

So, the old applications can stay as they are (OK, with a possible new flag
MEMPOOL_F_PKT_ALLOC) and the new ones can do the same, but you have to set the
ops explicitly.

The more different ways of using those APIs we have, the greater hell we have to maintain.

Regards
Jan

> 
> Regards,
> Dave.
> 
> 



-- 
   Jan Viktorin                  E-mail: Viktorin@RehiveTech.com
   System Architect              Web:    www.RehiveTech.com
   RehiveTech
   Brno, Czech Republic

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v8 1/3] mempool: support external mempool operations
  2016-06-09 12:30                             ` Jerin Jacob
  2016-06-09 13:03                               ` Shreyansh Jain
@ 2016-06-09 13:18                               ` Hunt, David
  2016-06-09 13:37                                 ` Jerin Jacob
  1 sibling, 1 reply; 238+ messages in thread
From: Hunt, David @ 2016-06-09 13:18 UTC (permalink / raw)
  To: Jerin Jacob, Shreyansh Jain; +Cc: dev, olivier.matz, viktorin



On 9/6/2016 1:30 PM, Jerin Jacob wrote:
> On Thu, Jun 09, 2016 at 11:49:44AM +0000, Shreyansh Jain wrote:
>> Hi Jerin,
> Hi Shreyansh,
>
>>>> Yes, this would simplify somewhat the creation of a pktmbuf pool, in that
>>> it
>>>> replaces
>>>> the rte_mempool_set_ops_byname with a flag bit. However, I'm not sure we
>>>> want
>>>> to introduce a third method of creating a mempool to the developers. If we
>>>> introduced this, we would then have:
>>>> 1. rte_pktmbuf_pool_create()
>>>> 2. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC set (which would
>>>>     use the configured custom handler)
>>>> 3. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC __not__ set followed
>>>>     by a call to rte_mempool_set_ops_byname() (would allow several different
>>>> custom
>>>>     handlers to be used in one application
>>>>
>>>> Does anyone else have an opinion on this? Oliver, Jerin, Jan?
>>> As I mentioned earlier, my take is not to create separate APIs for
>>> external mempool handlers. In my view, it's the same, just a separate
>>> mempool handler selected through function pointers.
>>>
>>> To keep backward compatibility, I think we can extend the flags
>>> in rte_mempool_create and have a single API for external/internal pool
>>> creation (this makes it easy for existing applications too: just add a
>>> mempool-handler command-line argument to existing applications to choose
>>> the mempool handler)
>> Maybe I am interpreting it wrong, but are you suggesting a single mempool handler for all buffer/packet needs of an application (passed as a command-line argument)?
>> That would be inefficient, especially for cases where a pool is backed by hardware. The application wouldn't want its generic buffers to consume hardware resources which would be better used for packets.
> It may vary from platform to platform or with the particular use case. For
> instance, the HW external pool manager for generic buffers may scale better
> than the SW multi-producer/multi-consumer implementation when the number of
> cores > N, as no locking is involved in enqueue/dequeue (though again, this
> depends on the specific HW implementation).
>
> I thought there was no harm in selecting the external pool handlers
> at the root level itself (rte_mempool_create), as by default it is
> SW MP/MC and it is just an option to override if the application wants it.
>
> Jerin
>


So, how about we go with the following, based on Shreyansh's suggestion:

1. Add in #define MEMPOOL_F_EMM_ALLOC 0x0010  (EMM for External Mempool 
Manager)

2. Check this bit in rte_mempool_create() just before the other bits are 
checked (by the way, the flags check has been moved to 
rte_mempool_create(), as per an earlier patchset, but was inadvertently 
reverted)

     /*
      * First check to see if we use the config'd mempool handler.
      * Then examine the combinations of SP/SC/MP/MC flags to
      * set the correct index into the table of ops structs.
      */
     if (flags & MEMPOOL_F_EMM_ALLOC)
         rte_mempool_set_ops_byname(mp, RTE_MBUF_DEFAULT_MEMPOOL_OPS);
     else if ((flags & MEMPOOL_F_SP_PUT) && (flags & MEMPOOL_F_SC_GET))
         rte_mempool_set_ops_byname(mp, "ring_sp_sc");
     else if (flags & MEMPOOL_F_SP_PUT)
         rte_mempool_set_ops_byname(mp, "ring_sp_mc");
     else if (flags & MEMPOOL_F_SC_GET)
         rte_mempool_set_ops_byname(mp, "ring_mp_sc");
     else
         rte_mempool_set_ops_byname(mp, "ring_mp_mc");

3. Modify rte_pktmbuf_pool_create to pass the bit to rte_mempool_create

-        sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
+        sizeof(struct rte_pktmbuf_pool_private), socket_id,
+        MEMPOOL_F_EMM_ALLOC);


This will allow legacy apps to use one external handler (as defined by 
RTE_MBUF_DEFAULT_MEMPOOL_OPS) by adding the MEMPOOL_F_EMM_ALLOC bit to 
their flags in the call to rte_mempool_create().

Of course, if an app wants to use more than one external handler, it 
can call create_empty and set_ops_byname() for each mempool it creates.

Regards,
Dave.

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v8 1/3] mempool: support external mempool operations
  2016-06-09 13:18                               ` Hunt, David
@ 2016-06-09 13:37                                 ` Jerin Jacob
  0 siblings, 0 replies; 238+ messages in thread
From: Jerin Jacob @ 2016-06-09 13:37 UTC (permalink / raw)
  To: Hunt, David; +Cc: Shreyansh Jain, dev, olivier.matz, viktorin

On Thu, Jun 09, 2016 at 02:18:57PM +0100, Hunt, David wrote:
> 
> 
> > > > As I mentioned earlier, my take is not to create separate APIs for
> > > > external mempool handlers. In my view, it's the same, just a separate
> > > > mempool handler selected through function pointers.
> > > > 
> > > > To keep backward compatibility, I think we can extend the flags
> > > > in rte_mempool_create and have a single API for external/internal pool
> > > > creation (this makes it easy for existing applications too: just add a
> > > > mempool-handler command-line argument to existing applications to choose
> > > > the mempool handler)
> > > Maybe I am interpreting it wrong, but are you suggesting a single mempool handler for all buffer/packet needs of an application (passed as a command-line argument)?
> > > That would be inefficient, especially for cases where a pool is backed by hardware. The application wouldn't want its generic buffers to consume hardware resources which would be better used for packets.
> > It may vary from platform to platform or with the particular use case. For instance,
> > the HW external pool manager for generic buffers may scale better than a SW multi
> > producer/multi-consumer implementation when the number of cores > N
> > (as no locking is involved in enqueue/dequeue; again, it depends on the
> > specific HW implementation)
> > 
> > I thought there's no harm in selecting the external pool handlers
> > at the root level itself (rte_mempool_create), as by default it is
> > SW MP/MC and it's just an option to override if the application wants it.
> > 
> > Jerin
> > 
> 
> 
> So, how about we go with the following, based on Shreyansh's suggestion:
> 
> 1. Add in #define MEMPOOL_F_EMM_ALLOC 0x0010  (EMM for External Mempool
> Manager)
> 
> 2. Check this bit in rte_mempool_create() just before the other bits are
> checked (by the way, the flags check has been moved to rte_mempool_create(),
> as per an earlier patchset, but was inadvertently reverted)
> 
>     /*
>      * First check to see if we use the config'd mempool handler.
>      * Then examine the combinations of SP/SC/MP/MC flags to
>      * set the correct index into the table of ops structs.
>      */
>     if (flags & MEMPOOL_F_EMM_ALLOC)
>         rte_mempool_set_ops_byname(mp, RTE_MBUF_DEFAULT_MEMPOOL_OPS);
>     else if ((flags & MEMPOOL_F_SP_PUT) && (flags & MEMPOOL_F_SC_GET))
>         rte_mempool_set_ops_byname(mp, "ring_sp_sc");
>     else if (flags & MEMPOOL_F_SP_PUT)
>         rte_mempool_set_ops_byname(mp, "ring_sp_mc");
>     else if (flags & MEMPOOL_F_SC_GET)
>         rte_mempool_set_ops_byname(mp, "ring_mp_sc");
>     else
>         rte_mempool_set_ops_byname(mp, "ring_mp_mc");
> 
> 3. Modify rte_pktmbuf_pool_create to pass the bit to rte_mempool_create
> 
> -        sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
> +        sizeof(struct rte_pktmbuf_pool_private), socket_id,
> +        MEMPOOL_F_EMM_ALLOC);
> 
> 
> This will allow legacy apps to use one external handler (as defined by
> RTE_MBUF_DEFAULT_MEMPOOL_OPS) by adding the MEMPOOL_F_EMM_ALLOC bit to their
> flags in the call to rte_mempool_create().
> 
> Of course, if an app wants to use more than one external handler, it can
> call rte_mempool_create_empty() and rte_mempool_set_ops_byname() for each mempool it creates.

+1

Since rte_pktmbuf_pool_create() does not take a flags argument, I think this
is the only option left for legacy apps.

> 
> Regards,
> Dave.


* Re: [PATCH v8 1/3] mempool: support external mempool operations
  2016-06-09 13:09                         ` Jan Viktorin
@ 2016-06-10  7:29                           ` Olivier Matz
  2016-06-10  8:49                             ` Jan Viktorin
                                               ` (3 more replies)
  0 siblings, 4 replies; 238+ messages in thread
From: Olivier Matz @ 2016-06-10  7:29 UTC (permalink / raw)
  To: Jan Viktorin, Hunt, David; +Cc: Shreyansh Jain, dev, jerin.jacob

Hi,

On 06/09/2016 03:09 PM, Jan Viktorin wrote:
>>> My suggestion is to have an additional flag,
>>> 'MEMPOOL_F_PKT_ALLOC', which, if specified, would:
>>> 
>>> ... #define MEMPOOL_F_SC_GET    0x0008 #define
>>> MEMPOOL_F_PKT_ALLOC 0x0010 ...
>>> 
>>> in rte_mempool_create_empty: ... after checking the other
>>> MEMPOOL_F_* flags...
>>> 
>>> if (flags & MEMPOOL_F_PKT_ALLOC) rte_mempool_set_ops_byname(mp,
>>> RTE_MBUF_DEFAULT_MEMPOOL_OPS)
>>> 
>>> And removing the redundant call to rte_mempool_set_ops_byname()
>>> in rte_pktmbuf_create_pool().
>>> 
>>> Thereafter, rte_pktmbuf_pool_create can be changed to:
>>> 
>>> ... mp = rte_mempool_create_empty(name, n, elt_size, cache_size, 
>>> -        sizeof(struct rte_pktmbuf_pool_private), socket_id, 0); 
>>> +        sizeof(struct rte_pktmbuf_pool_private), socket_id, +
>>> MEMPOOL_F_PKT_ALLOC); if (mp == NULL) return NULL;
>> 
>> Yes, this would simplify somewhat the creation of a pktmbuf pool,
>> in that it replaces the rte_mempool_set_ops_byname with a flag bit.
>> However, I'm not sure we want to introduce a third method of
>> creating a mempool to the developers. If we introduced this, we
>> would then have: 1. rte_pktmbuf_pool_create() 2.
>> rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC set (which
>> would use the configured custom handler) 3.
>> rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC __not__ set
>> followed by a call to rte_mempool_set_ops_byname() (would allow
>> several different custom handlers to be used in one application
>> 
>> Does anyone else have an opinion on this? Oliver, Jerin, Jan?
> 
> I am quite careful about this topic as I don't feel to be very
> involved in all the use cases. My opinion is that the _new API_
> should be able to cover all cases and the _old API_ should be
> backwards compatible, however, built on top of the _new API_.
> 
> I.e. I think the flags MEMPOOL_F_SP_PUT, MEMPOOL_F_SC_GET (relics
> of the old API) should be accepted by the old API ONLY. The
> rte_mempool_create_empty should not process them.

The rte_mempool_create_empty() function already processes these flags
(SC_GET, SP_PUT) as of today.

> Similarly for a potential MEMPOOL_F_PKT_ALLOC, I would not pollute
> rte_mempool_create_empty with this anymore.

+1

I think we should stop adding flags. Flags are preferred for independent
features. Here, what would be the behavior with MEMPOOL_F_PKT_ALLOC +
MEMPOOL_F_SP_PUT?

Another reason to not add this flag is the rte_mempool library
should not be aware of mbufs. The mbuf pools rely on mempools, but
not the contrary.


> In overall we would get exactly 2 approaches (and not more):
> 
> * using rte_mempool_create with flags calling the
> rte_mempool_create_empty and rte_mempool_set_ops_byname internally
> (so this layer can be marked as deprecated and removed in the
> future)

Agree. This was one of the objectives of my mempool rework patchset:
provide a more flexible API, and avoid functions with 10 to 15
arguments.

> * using rte_mempool_create_empty + rte_mempool_set_ops_byname -
> allowing any customizations but with the necessity to change the
> applications (new preferred API)

Yes.
And if required, maybe a third API is possible in case of mbuf pools.
Indeed, the applications are encouraged to use rte_pktmbuf_pool_create()
to create a pool of mbufs instead of the mempool API. If an application
wants to select specific ops for it, we could add:

  rte_pktmbuf_pool_create_<something>(..., name)

instead of using the mempool API.
I think this is what Shreyansh suggests when he says:

  It sounds fine if calls to rte_mempool_* can select an external
  handler *optionally* - but, if we pass it as command line, it would
  be binding (at least, semantically) for rte_pktmbuf_* calls as well.
  Isn't it?


> So, the old applications can stay as they are (OK, with a possible
> new flag MEMPOOL_F_PKT_ALLOC) and the new one can do the same but you
> have to set the ops explicitly.
> 
> The more different ways of using those APIs we have, the greater hell
> we have to maintain.

I'm really not in favor of a MEMPOOL_F_PKT_ALLOC flag in mempool api.

I think David's patch is already a good step forward. Let's do it
step by step. Next step is maybe to update some applications (at least
testpmd) to select a new pool handler dynamically.

Regards,
Olivier


* Re: [PATCH v8 1/3] mempool: support external mempool operations
  2016-06-10  7:29                           ` Olivier Matz
@ 2016-06-10  8:49                             ` Jan Viktorin
  2016-06-10  9:02                               ` Hunt, David
  2016-06-10  9:34                             ` Hunt, David
                                               ` (2 subsequent siblings)
  3 siblings, 1 reply; 238+ messages in thread
From: Jan Viktorin @ 2016-06-10  8:49 UTC (permalink / raw)
  To: Olivier Matz; +Cc: Hunt, David, Shreyansh Jain, dev, jerin.jacob

On Fri, 10 Jun 2016 09:29:44 +0200
Olivier Matz <olivier.matz@6wind.com> wrote:

> Hi,
> 
> On 06/09/2016 03:09 PM, Jan Viktorin wrote:
> >>> My suggestion is to have an additional flag,
> >>> 'MEMPOOL_F_PKT_ALLOC', which, if specified, would:
> >>> 
> >>> ... #define MEMPOOL_F_SC_GET    0x0008 #define
> >>> MEMPOOL_F_PKT_ALLOC 0x0010 ...
> >>> 
> >>> in rte_mempool_create_empty: ... after checking the other
> >>> MEMPOOL_F_* flags...
> >>> 
> >>> if (flags & MEMPOOL_F_PKT_ALLOC) rte_mempool_set_ops_byname(mp,
> >>> RTE_MBUF_DEFAULT_MEMPOOL_OPS)
> >>> 
> >>> And removing the redundant call to rte_mempool_set_ops_byname()
> >>> in rte_pktmbuf_create_pool().
> >>> 
> >>> Thereafter, rte_pktmbuf_pool_create can be changed to:
> >>> 
> >>> ... mp = rte_mempool_create_empty(name, n, elt_size, cache_size, 
> >>> -        sizeof(struct rte_pktmbuf_pool_private), socket_id, 0); 
> >>> +        sizeof(struct rte_pktmbuf_pool_private), socket_id, +
> >>> MEMPOOL_F_PKT_ALLOC); if (mp == NULL) return NULL;  
> >> 
> >> Yes, this would simplify somewhat the creation of a pktmbuf pool,
> >> in that it replaces the rte_mempool_set_ops_byname with a flag bit.
> >> However, I'm not sure we want to introduce a third method of
> >> creating a mempool to the developers. If we introduced this, we
> >> would then have: 1. rte_pktmbuf_pool_create() 2.
> >> rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC set (which
> >> would use the configured custom handler) 3.
> >> rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC __not__ set
> >> followed by a call to rte_mempool_set_ops_byname() (would allow
> >> several different custom handlers to be used in one application
> >> 
> >> Does anyone else have an opinion on this? Oliver, Jerin, Jan?  
> > 
> > I am quite careful about this topic as I don't feel to be very
> > involved in all the use cases. My opinion is that the _new API_
> > should be able to cover all cases and the _old API_ should be
> > backwards compatible, however, built on top of the _new API_.
> > 
> > I.e. I think, the flags MEMPOOL_F_SP_PUT, MEMPOOL_F_SC_GET (relicts
> > of the old API) should be accepted by the old API ONLY. The
> > rte_mempool_create_empty should not process them.  
> 
> The rte_mempool_create_empty() function already processes these flags
> (SC_GET, SP_PUT) as of today.

Yes, I consider it quite strange. When thinking more about the mempool API,
I'd move the flags processing to the rte_mempool_create. Semantically, it
makes more sense as the "empty" clearly describes that it is empty. But with
the flags, it is not... What is the reason to have those flags there?

> 
> > Similarly for a potential MEMPOOL_F_PKT_ALLOC, I would not pollute the
> > rte_mempool_create_empty by this anymore.  
> 
> +1
> 
> I think we should stop adding flags. Flags are preferred for independent
> features. Here, what would be the behavior with MEMPOOL_F_PKT_ALLOC +
> MEMPOOL_F_SP_PUT?

+1 :)

> 
> Another reason to not add this flag is the rte_mempool library
> should not be aware of mbufs. The mbuf pools rely on mempools, but
> not the contrary.
> 
> 
> > In overall we would get exactly 2 approaches (and not more):
> > future)  

Well, now I can see that I've just written the same thing but with my own words :).

> > 
> > * using rte_mempool_create with flags calling the
> > rte_mempool_create_empty and rte_mempool_set_ops_byname internally
> > (so this layer can be marked as deprecated and removed in the
> 
> Agree. This was one of the objective of my mempool rework patchset:
> provide a more flexible API, and avoid functions with 10 to 15
> arguments.
> 
> > * using rte_mempool_create_empty + rte_mempool_set_ops_byname -
> > allowing any customizations but with the necessity to change the
> > applications (new preferred API)  
> 
> Yes.
> And if required, maybe a third API is possible in case of mbuf pools.

I don't count this. It's an upper layer, but yes.

> Indeed, the applications are encouraged to use rte_pktmbuf_pool_create()
> to create a pool of mbuf instead of mempool API. If an application
> wants to select specific ops for it, we could add:
> 
>   rte_pktmbuf_pool_create_<something>(..., name)

Seems like a good idea.

> 
> instead of using the mempool API.
> I think this is what Shreyansh suggests when he says:
> 
>   It sounds fine if calls to rte_mempool_* can select an external
>   handler *optionally* - but, if we pass it as command line, it would
>   be binding (at least, semantically) for rte_pktmbuf_* calls as well.
>   Isn't it?

I think the question here is whether the processing of such an optional
command line specification is up to the application or up to the DPDK
core. If we leave it in applications, it's just a matter of API.

> 
> > So, the old applications can stay as they are (OK, with a possible
> > new flag MEMPOOL_F_PKT_ALLOC) and the new one can do the same but you
> > have to set the ops explicitly.
> > 
> > The more different ways of using those APIs we have, the greater hell
> > we have to maintain.  
> 
> I'm really not in favor of a MEMPOOL_F_PKT_ALLOC flag in mempool api.

Your arguments are valid, +1.

> 
> I think David's patch is already a good step forward. Let's do it
> step by step. Next step is maybe to update some applications (at least
> testpmd) to select a new pool handler dynamically.

We can probably make an API to process the command line by applications
that configures a mempool automatically. So, it would be a one-liner or
something like that. Like rte_mempool_create_from_devargs(...).

Jan

> 
> Regards,
> Olivier


* Re: [PATCH v8 1/3] mempool: support external mempool operations
  2016-06-10  8:49                             ` Jan Viktorin
@ 2016-06-10  9:02                               ` Hunt, David
  0 siblings, 0 replies; 238+ messages in thread
From: Hunt, David @ 2016-06-10  9:02 UTC (permalink / raw)
  To: Jan Viktorin, Olivier Matz; +Cc: Shreyansh Jain, dev, jerin.jacob

Hi Jan,

On 10/6/2016 9:49 AM, Jan Viktorin wrote:
> On Fri, 10 Jun 2016 09:29:44 +0200
> Olivier Matz <olivier.matz@6wind.com> wrote:
>
>> Hi,
>>
>> On 06/09/2016 03:09 PM, Jan Viktorin wrote:
>>>>> My suggestion is to have an additional flag,
>>>>> 'MEMPOOL_F_PKT_ALLOC', which, if specified, would:
>>>>>
>>>>> ... #define MEMPOOL_F_SC_GET    0x0008 #define
>>>>> MEMPOOL_F_PKT_ALLOC 0x0010 ...
>>>>>
>>>>> in rte_mempool_create_empty: ... after checking the other
>>>>> MEMPOOL_F_* flags...
>>>>>
>>>>> if (flags & MEMPOOL_F_PKT_ALLOC) rte_mempool_set_ops_byname(mp,
>>>>> RTE_MBUF_DEFAULT_MEMPOOL_OPS)
>>>>>
>>>>> And removing the redundant call to rte_mempool_set_ops_byname()
>>>>> in rte_pktmbuf_create_pool().
>>>>>
>>>>> Thereafter, rte_pktmbuf_pool_create can be changed to:
>>>>>
>>>>> ... mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
>>>>> -        sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
>>>>> +        sizeof(struct rte_pktmbuf_pool_private), socket_id, +
>>>>> MEMPOOL_F_PKT_ALLOC); if (mp == NULL) return NULL;
>>>> Yes, this would simplify somewhat the creation of a pktmbuf pool,
>>>> in that it replaces the rte_mempool_set_ops_byname with a flag bit.
>>>> However, I'm not sure we want to introduce a third method of
>>>> creating a mempool to the developers. If we introduced this, we
>>>> would then have: 1. rte_pktmbuf_pool_create() 2.
>>>> rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC set (which
>>>> would use the configured custom handler) 3.
>>>> rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC __not__ set
>>>> followed by a call to rte_mempool_set_ops_byname() (would allow
>>>> several different custom handlers to be used in one application
>>>>
>>>> Does anyone else have an opinion on this? Oliver, Jerin, Jan?
>>> I am quite careful about this topic as I don't feel to be very
>>> involved in all the use cases. My opinion is that the _new API_
>>> should be able to cover all cases and the _old API_ should be
>>> backwards compatible, however, built on top of the _new API_.
>>>
>>> I.e. I think, the flags MEMPOOL_F_SP_PUT, MEMPOOL_F_SC_GET (relicts
>>> of the old API) should be accepted by the old API ONLY. The
>>> rte_mempool_create_empty should not process them.
>> The rte_mempool_create_empty() function already processes these flags
>> (SC_GET, SP_PUT) as of today.
> Yes, I consider it quite strange. When thinking more about the mempool API,
> I'd move the flags processing to the rte_mempool_create. Semantically, it
> makes more sense as the "empty" clearly describes that it is empty. But with
> the flags, it is not... What is the reason to have those flags there?

Yes, they should be in rte_mempool_create. They were in an earlier
patch, but regressed. I'll have them in rte_mempool_create in the next
patch.


[...]


Rgds,
Dave.


* Re: [PATCH v8 1/3] mempool: support external mempool operations
  2016-06-10  7:29                           ` Olivier Matz
  2016-06-10  8:49                             ` Jan Viktorin
@ 2016-06-10  9:34                             ` Hunt, David
  2016-06-10 11:29                               ` Shreyansh Jain
  2016-06-10 11:13                             ` Jerin Jacob
  2016-06-10 11:37                             ` Shreyansh Jain
  3 siblings, 1 reply; 238+ messages in thread
From: Hunt, David @ 2016-06-10  9:34 UTC (permalink / raw)
  To: Olivier Matz, Jan Viktorin; +Cc: Shreyansh Jain, dev, jerin.jacob

Hi all,

On 10/6/2016 8:29 AM, Olivier Matz wrote:
> Hi,
>
> On 06/09/2016 03:09 PM, Jan Viktorin wrote:
>>>> My suggestion is to have an additional flag,
>>>> 'MEMPOOL_F_PKT_ALLOC', which, if specified, would:
>>>>
>>>> ... #define MEMPOOL_F_SC_GET    0x0008 #define
>>>> MEMPOOL_F_PKT_ALLOC 0x0010 ...
>>>>
>>>> in rte_mempool_create_empty: ... after checking the other
>>>> MEMPOOL_F_* flags...
>>>>
>>>> if (flags & MEMPOOL_F_PKT_ALLOC) rte_mempool_set_ops_byname(mp,
>>>> RTE_MBUF_DEFAULT_MEMPOOL_OPS)
>>>>
>>>> And removing the redundant call to rte_mempool_set_ops_byname()
>>>> in rte_pktmbuf_create_pool().
>>>>
>>>> Thereafter, rte_pktmbuf_pool_create can be changed to:
>>>>
>>>> ... mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
>>>> -        sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
>>>> +        sizeof(struct rte_pktmbuf_pool_private), socket_id, +
>>>> MEMPOOL_F_PKT_ALLOC); if (mp == NULL) return NULL;
>>> Yes, this would simplify somewhat the creation of a pktmbuf pool,
>>> in that it replaces the rte_mempool_set_ops_byname with a flag bit.
>>> However, I'm not sure we want to introduce a third method of
>>> creating a mempool to the developers. If we introduced this, we
>>> would then have: 1. rte_pktmbuf_pool_create() 2.
>>> rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC set (which
>>> would use the configured custom handler) 3.
>>> rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC __not__ set
>>> followed by a call to rte_mempool_set_ops_byname() (would allow
>>> several different custom handlers to be used in one application
>>>
>>> Does anyone else have an opinion on this? Oliver, Jerin, Jan?
>> I am quite careful about this topic as I don't feel to be very
>> involved in all the use cases. My opinion is that the _new API_
>> should be able to cover all cases and the _old API_ should be
>> backwards compatible, however, built on top of the _new API_.
>>
>> I.e. I think, the flags MEMPOOL_F_SP_PUT, MEMPOOL_F_SC_GET (relicts
>> of the old API) should be accepted by the old API ONLY. The
>> rte_mempool_create_empty should not process them.
> The rte_mempool_create_empty() function already processes these flags
> (SC_GET, SP_PUT) as of today.
>
>> Similarly for a potential MEMPOOL_F_PKT_ALLOC, I would not pollute the
>> rte_mempool_create_empty by this anymore.
> +1
>
> I think we should stop adding flags. Flags are preferred for independent
> features. Here, what would be the behavior with MEMPOOL_F_PKT_ALLOC +
> MEMPOOL_F_SP_PUT?
>
> Another reason to not add this flag is the rte_mempool library
> should not be aware of mbufs. The mbuf pools rely on mempools, but
> not the contrary.
>
>
>> In overall we would get exactly 2 approaches (and not more):
>>
>> * using rte_mempool_create with flags calling the
>> rte_mempool_create_empty and rte_mempool_set_ops_byname internally
>> (so this layer can be marked as deprecated and removed in the
>> future)
> Agree. This was one of the objective of my mempool rework patchset:
> provide a more flexible API, and avoid functions with 10 to 15
> arguments.
>
>> * using rte_mempool_create_empty + rte_mempool_set_ops_byname -
>> allowing any customizations but with the necessity to change the
>> applications (new preferred API)
> Yes.
> And if required, maybe a third API is possible in case of mbuf pools.
> Indeed, the applications are encouraged to use rte_pktmbuf_pool_create()
> to create a pool of mbuf instead of mempool API. If an application
> wants to select specific ops for it, we could add:
>
>    rte_pktmbuf_pool_create_<something>(..., name)
>
> instead of using the mempool API.
> I think this is what Shreyansh suggests when he says:
>
>    It sounds fine if calls to rte_mempool_* can select an external
>    handler *optionally* - but, if we pass it as command line, it would
>    be binding (at least, semantically) for rte_pktmbuf_* calls as well.
>    Isn't it?
>
>
>> So, the old applications can stay as they are (OK, with a possible
>> new flag MEMPOOL_F_PKT_ALLOC) and the new one can do the same but you
>> have to set the ops explicitly.
>>
>> The more different ways of using those APIs we have, the greater hell
>> we have to maintain.
> I'm really not in favor of a MEMPOOL_F_PKT_ALLOC flag in mempool api.

I would tend to agree, even though yesterday I proposed making that
change. However, thinking about it some more, I'm not totally happy with
the MEMPOOL_F_PKT_ALLOC addition. It adds yet another method of creating
a mempool, and I think that may introduce some confusion for some
developers.

I also like the suggestion of rte_pktmbuf_pool_create_<something>(...,
name) made above; I was thinking the same myself last night, and I would
prefer that rather than adding the MEMPOOL_F_PKT_ALLOC flag. Developers
can add that function into their apps as a wrapper around
rte_mempool_create_empty -> rte_mempool_set_ops_byname should they need
more than one pktmbuf allocator. Otherwise they can use the one that
makes use of the RTE_MBUF_DEFAULT_MEMPOOL_OPS config setting.


> I think David's patch is already a good step forward. Let's do it
> step by step. Next step is maybe to update some applications (at least
> testpmd) to select a new pool handler dynamically.
>
> Regards,
> Olivier


* Re: [PATCH v8 1/3] mempool: support external mempool operations
  2016-06-10  7:29                           ` Olivier Matz
  2016-06-10  8:49                             ` Jan Viktorin
  2016-06-10  9:34                             ` Hunt, David
@ 2016-06-10 11:13                             ` Jerin Jacob
  2016-06-10 11:37                             ` Shreyansh Jain
  3 siblings, 0 replies; 238+ messages in thread
From: Jerin Jacob @ 2016-06-10 11:13 UTC (permalink / raw)
  To: Olivier Matz; +Cc: Jan Viktorin, Hunt, David, Shreyansh Jain, dev

On Fri, Jun 10, 2016 at 09:29:44AM +0200, Olivier Matz wrote:
> Hi,
> 
> On 06/09/2016 03:09 PM, Jan Viktorin wrote:
> >>> My suggestion is to have an additional flag,
> >>> 'MEMPOOL_F_PKT_ALLOC', which, if specified, would:
> >>> 
> >>> ... #define MEMPOOL_F_SC_GET    0x0008 #define
> >>> MEMPOOL_F_PKT_ALLOC 0x0010 ...
> >>> 
> >>> in rte_mempool_create_empty: ... after checking the other
> >>> MEMPOOL_F_* flags...
> >>> 
> >>> if (flags & MEMPOOL_F_PKT_ALLOC) rte_mempool_set_ops_byname(mp,
> >>> RTE_MBUF_DEFAULT_MEMPOOL_OPS)
> >>> 
> >>> And removing the redundant call to rte_mempool_set_ops_byname()
> >>> in rte_pktmbuf_create_pool().
> >>> 
> >>> Thereafter, rte_pktmbuf_pool_create can be changed to:
> >>> 
> >>> ... mp = rte_mempool_create_empty(name, n, elt_size, cache_size, 
> >>> -        sizeof(struct rte_pktmbuf_pool_private), socket_id, 0); 
> >>> +        sizeof(struct rte_pktmbuf_pool_private), socket_id, +
> >>> MEMPOOL_F_PKT_ALLOC); if (mp == NULL) return NULL;
> >> 
> >> Yes, this would simplify somewhat the creation of a pktmbuf pool,
> >> in that it replaces the rte_mempool_set_ops_byname with a flag bit.
> >> However, I'm not sure we want to introduce a third method of
> >> creating a mempool to the developers. If we introduced this, we
> >> would then have: 1. rte_pktmbuf_pool_create() 2.
> >> rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC set (which
> >> would use the configured custom handler) 3.
> >> rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC __not__ set
> >> followed by a call to rte_mempool_set_ops_byname() (would allow
> >> several different custom handlers to be used in one application
> >> 
> >> Does anyone else have an opinion on this? Oliver, Jerin, Jan?
> > 
> > I am quite careful about this topic as I don't feel to be very
> > involved in all the use cases. My opinion is that the _new API_
> > should be able to cover all cases and the _old API_ should be
> > backwards compatible, however, built on top of the _new API_.
> > 
> > I.e. I think, the flags MEMPOOL_F_SP_PUT, MEMPOOL_F_SC_GET (relicts
> > of the old API) should be accepted by the old API ONLY. The
> > rte_mempool_create_empty should not process them.
> 
> The rte_mempool_create_empty() function already processes these flags
> (SC_GET, SP_PUT) as of today.
> 
> > Similarly for a potential MEMPOOL_F_PKT_ALLOC, I would not pollute the
> > rte_mempool_create_empty by this anymore.
> 
> +1
> 
> I think we should stop adding flags. Flags are preferred for independent
> features. Here, what would be the behavior with MEMPOOL_F_PKT_ALLOC +
> MEMPOOL_F_SP_PUT?

+1

MEMPOOL_F_PKT_ALLOC was introduced only so that legacy applications
could work with the external pool manager. If we are not taking that
path (and expect applications to change), then IMO we can have a proper
mempool create API to accommodate the external pool and deprecate
rte_mempool_create/rte_mempool_xmem_create

like,
1) Remove the 10-to-15-argument pool create and make it a structure
(more forward looking)
2) Remove flags
3) Have the same API for external and internal mempool create and
differentiate the handler through "name". NULL can be the default
mempool handler (MPMC)


> 
> Another reason to not add this flag is the rte_mempool library
> should not be aware of mbufs. The mbuf pools rely on mempools, but
> not the contrary.
> 


* Re: [PATCH v8 1/3] mempool: support external mempool operations
  2016-06-10  9:34                             ` Hunt, David
@ 2016-06-10 11:29                               ` Shreyansh Jain
  0 siblings, 0 replies; 238+ messages in thread
From: Shreyansh Jain @ 2016-06-10 11:29 UTC (permalink / raw)
  To: Hunt, David; +Cc: dev, jerin.jacob, Olivier Matz, Jan Viktorin

Hi David,

> -----Original Message-----
> From: Hunt, David [mailto:david.hunt@intel.com]
> Sent: Friday, June 10, 2016 3:05 PM
> To: Olivier Matz <olivier.matz@6wind.com>; Jan Viktorin
> <viktorin@rehivetech.com>
> Cc: Shreyansh Jain <shreyansh.jain@nxp.com>; dev@dpdk.org;
> jerin.jacob@caviumnetworks.com
> Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool
> operations
> 
> Hi all,
> 
> On 10/6/2016 8:29 AM, Olivier Matz wrote:
> > Hi,
> >

[...snip...]
> >
> >
> >> So, the old applications can stay as they are (OK, with a possible
> >> new flag MEMPOOL_F_PKT_ALLOC) and the new one can do the same but you
> >> have to set the ops explicitly.
> >>
> >> The more different ways of using those APIs we have, the greater hell
> >> we have to maintain.
> > I'm really not in favor of a MEMPOOL_F_PKT_ALLOC flag in mempool api.
> 
> I would tend to agree, even though yesterday I proposed making that
> change. However,
> thinking about it some more, I'm not totally happy with the
> MEMPOOL_F_PKT_ALLOC addition. It adds yet another method of creating a
> mempool,
> and I think that may introduce some confusion with some developers.
> 
> I also like the suggestion of rte_pktmbuf_pool_create_<something>(...,
> name) suggested
> above, I was thinking the same myself last night, and I would prefer
> that rather than adding the
> MEMPOOL_F_PKT_ALLOC flag. Developers can add that function into their
> apps as a wrapper
> to rte_mempool_create_empty->rte_mempool_set_ops_byname should the need
> to have
> more than one pktmbuf allocator. Otherwise they can use the one that
> makes use of the
> RTE_MBUF_DEFAULT_MEMPOOL_OPS config setting.

+1

> 
> 
> > I think David's patch is already a good step forward. Let's do it
> > step by step. Next step is maybe to update some applications (at least
> > testpmd) to select a new pool handler dynamically.
> >
> > Regards,
> > Olivier

Thanks.

-
Shreyansh


* Re: [PATCH v8 1/3] mempool: support external mempool operations
  2016-06-10  7:29                           ` Olivier Matz
                                               ` (2 preceding siblings ...)
  2016-06-10 11:13                             ` Jerin Jacob
@ 2016-06-10 11:37                             ` Shreyansh Jain
  3 siblings, 0 replies; 238+ messages in thread
From: Shreyansh Jain @ 2016-06-10 11:37 UTC (permalink / raw)
  To: Olivier Matz; +Cc: dev, jerin.jacob, Jan Viktorin, Hunt, David

Hi Olivier,

> -----Original Message-----
> From: Olivier Matz [mailto:olivier.matz@6wind.com]
> Sent: Friday, June 10, 2016 1:00 PM
> To: Jan Viktorin <viktorin@rehivetech.com>; Hunt, David
> <david.hunt@intel.com>
> Cc: Shreyansh Jain <shreyansh.jain@nxp.com>; dev@dpdk.org;
> jerin.jacob@caviumnetworks.com
> Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool
> operations
> 
> Hi,
> 
> On 06/09/2016 03:09 PM, Jan Viktorin wrote:
> >>> My suggestion is to have an additional flag,
> >>> 'MEMPOOL_F_PKT_ALLOC', which, if specified, would:
> >>>
> >>> ... #define MEMPOOL_F_SC_GET    0x0008 #define
> >>> MEMPOOL_F_PKT_ALLOC 0x0010 ...
> >>>
> >>> in rte_mempool_create_empty: ... after checking the other
> >>> MEMPOOL_F_* flags...
> >>>
> >>> if (flags & MEMPOOL_F_PKT_ALLOC) rte_mempool_set_ops_byname(mp,
> >>> RTE_MBUF_DEFAULT_MEMPOOL_OPS)
> >>>
> >>> And removing the redundant call to rte_mempool_set_ops_byname()
> >>> in rte_pktmbuf_create_pool().
> >>>
> >>> Thereafter, rte_pktmbuf_pool_create can be changed to:
> >>>
> >>> ... mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
> >>> -        sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
> >>> +        sizeof(struct rte_pktmbuf_pool_private), socket_id, +
> >>> MEMPOOL_F_PKT_ALLOC); if (mp == NULL) return NULL;
> >>
> >> Yes, this would simplify somewhat the creation of a pktmbuf pool,
> >> in that it replaces the rte_mempool_set_ops_byname with a flag bit.
> >> However, I'm not sure we want to introduce a third method of
> >> creating a mempool to the developers. If we introduced this, we
> >> would then have: 1. rte_pktmbuf_pool_create() 2.
> >> rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC set (which
> >> would use the configured custom handler) 3.
> >> rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC __not__ set
> >> followed by a call to rte_mempool_set_ops_byname() (would allow
> >> several different custom handlers to be used in one application
> >>
> >> Does anyone else have an opinion on this? Oliver, Jerin, Jan?
> >
> > I am quite careful about this topic as I don't feel to be very
> > involved in all the use cases. My opinion is that the _new API_
> > should be able to cover all cases and the _old API_ should be
> > backwards compatible, however, built on top of the _new API_.
> >
> > I.e. I think, the flags MEMPOOL_F_SP_PUT, MEMPOOL_F_SC_GET (relicts
> > of the old API) should be accepted by the old API ONLY. The
> > rte_mempool_create_empty should not process them.
> 
> The rte_mempool_create_empty() function already processes these flags
> (SC_GET, SP_PUT) as of today.
> 
> > Similarly for a potential MEMPOOL_F_PKT_ALLOC, I would not pollute
> > rte_mempool_create_empty with this any more.
> 
> +1
> 
> I think we should stop adding flags. Flags are preferred for independent
> features. Here, what would be the behavior with MEMPOOL_F_PKT_ALLOC +
> MEMPOOL_F_SP_PUT?
> 
> Another reason not to add this flag is that the rte_mempool library
> should not be aware of mbufs. The mbuf pools rely on mempools, but
> not the contrary.

Agree - mempool should be agnostic of the mbufs using it.
But, mempool should be aware of the allocator it is using, in my opinion.

And I agree with your argument about "MEMPOOL_F_PKT_ALLOC + MEMPOOL_F_SP_PUT" - it is bad semantics.

> 
> 
> > In overall we would get exactly 2 approaches (and not more):
> >
> > * using rte_mempool_create with flags calling the
> > rte_mempool_create_empty and rte_mempool_set_ops_byname internally
> > (so this layer can be marked as deprecated and removed in the
> > future)
> 
> Agree. This was one of the objectives of my mempool rework patchset:
> provide a more flexible API, and avoid functions with 10 to 15
> arguments.
> 
> > * using rte_mempool_create_empty + rte_mempool_set_ops_byname -
> > allowing any customizations but with the necessity to change the
> > applications (new preferred API)
> 
> Yes.
> And if required, maybe a third API is possible in case of mbuf pools.
> Indeed, the applications are encouraged to use rte_pktmbuf_pool_create()
> to create a pool of mbuf instead of mempool API. If an application
> wants to select specific ops for it, we could add:
> 
>   rte_pktmbuf_pool_create_<something>(..., name)
> 
> instead of using the mempool API.
> I think this is what Shreyansh suggests when he says:
> 
>   It sounds fine if calls to rte_mempool_* can select an external
>   handler *optionally* - but, if we pass it as command line, it would
>   be binding (at least, semantically) for rte_pktmbuf_* calls as well.
>   Isn't it?

Er. I think I should clarify the context.
I was referring to the 'command-line-argument-for-selecting-external-mem-allocator' comment. I was just highlighting that it would probably cause a conflict between the two APIs.

But, having said that, I agree with you about "...applications are encouraged to use rte_pktmbuf_pool_create() to create a pool of mbuf...".

> 
> 
> > So, the old applications can stay as they are (OK, with a possible
> > new flag MEMPOOL_F_PKT_ALLOC) and the new one can do the same but you
> > have to set the ops explicitly.
> >
> > The more different ways of using those APIs we have, the greater hell
> > we have to maintain.
> 
> I'm really not in favor of a MEMPOOL_F_PKT_ALLOC flag in mempool api.

Agree. Flags are not a pretty way of handling mutually exclusive features - they are not intuitive.
A semantically cleaner API is the better approach.

> 
> I think David's patch is already a good step forward. Let's do it
> step by step. Next step is maybe to update some applications (at least
> testpmd) to select a new pool handler dynamically.

Fair enough. We can slowly make changes to all applications to show 'best practice' of creating buffer or packet pools.

> 
> Regards,
> Olivier

-
Shreyansh

^ permalink raw reply	[flat|nested] 238+ messages in thread

* [PATCH v9 0/3] mempool: add external mempool manager
  2016-06-03 14:58             ` [PATCH v8 0/5] mempool: add external mempool manager David Hunt
                                 ` (2 preceding siblings ...)
  2016-06-03 14:58               ` [PATCH v8 3/3] mbuf: make default mempool ops configurable at build David Hunt
@ 2016-06-10 15:16               ` David Hunt
  2016-06-10 15:16                 ` [PATCH v9 1/3] mempool: support external mempool operations David Hunt
                                   ` (3 more replies)
  3 siblings, 4 replies; 238+ messages in thread
From: David Hunt @ 2016-06-10 15:16 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, viktorin, jerin.jacob, shreyansh.jain

Here's the latest version of the External Mempool Manager patchset.
It's rebased on top of the latest head as of 9th June 2016, including
Olivier's 35-part patch series on mempool re-org [1].

[1] http://dpdk.org/ml/archives/dev/2016-May/039229.html

v9 changes:

 * added a check for NULL alloc in rte_mempool_ops_register
 * rte_mempool_alloc_t now returns int instead of void*
 * fixed some comment typos
 * removed some unneeded typecasts
 * changed a return NULL to return -EEXIST in rte_mempool_ops_register
 * fixed rte_mempool_version.map file so builds ok as shared libs
 * moved flags check from rte_mempool_create_empty to rte_mempool_create

v8 changes:

 * merged first three patches in the series into one.
 * changed parameters to ops callbacks to all be rte_mempool pointer
   rather than pointer to opaque data or uint64.
 * comment fixes.
 * fixed parameter to _free function (was inconsistent).
 * changed MEMPOOL_F_RING_CREATED to MEMPOOL_F_POOL_CREATED

v7 changes:

 * Changed rte_mempool_handler_table to rte_mempool_ops_table
 * Changed hander_idx to ops_index in rte_mempool struct
 * Reworked comments in rte_mempool.h around ops functions
 * Changed rte_mempool_hander.c to rte_mempool_ops.c
 * Changed all functions containing _handler_ to _ops_
 * Now there is no mention of 'handler' left
 * Other small changes out of review of mailing list

v6 changes:

 * Moved the flags handling from rte_mempool_create_empty to
   rte_mempool_create, as it's only there for backward compatibility
 * Various comment additions and cleanup
 * Renamed rte_mempool_handler to rte_mempool_ops
 * Added a union for *pool and u64 pool_id in struct rte_mempool
 * split the original patch into a few parts for easier review.
 * rename functions with _ext_ to _ops_.
 * addressed review comments
 * renamed put and get functions to enqueue and dequeue
 * changed occurrences of rte_mempool_ops to const, as they
   contain function pointers (security)
 * split out the default external mempool handler into a separate
   patch for easier review

v5 changes:
 * rebasing, as it is dependent on another patch series [1]

v4 changes (Olivier Matz):
 * remove the rte_mempool_create_ext() function. To change the handler, the
   user has to do the following:
   - mp = rte_mempool_create_empty()
   - rte_mempool_set_handler(mp, "my_handler")
   - rte_mempool_populate_default(mp)
   This avoids adding another function with more than 10 arguments and
   duplicating the doxygen comments
 * change the api of rte_mempool_alloc_t: only the mempool pointer is required
   as all information is available in it
 * change the api of rte_mempool_free_t: remove return value
 * move inline wrapper functions from the .c to the .h (else they won't be
   inlined). This implies having one header file (rte_mempool.h), or it
   would have generated cross-dependency issues.
 * remove now unused MEMPOOL_F_INT_HANDLER (note: it was misused anyway due
   to the use of && instead of &)
 * fix build in debug mode (__MEMPOOL_STAT_ADD(mp, put_pool, n) remaining)
 * fix build with shared libraries (global handler has to be declared in
   the .map file)
 * rationalize #include order
 * remove unused function rte_mempool_get_handler_name()
 * rename some structures, fields, functions
 * remove the static in front of rte_tailq_elem rte_mempool_tailq (comment
   from Yuanhan)
 * test the ext mempool handler in the same file as the standard mempool
   tests, to avoid duplicating the code
 * rework the custom handler in mempool_test
 * rework a bit the patch selecting default mbuf pool handler
 * fix some doxygen comments

v3 changes:
 * simplified the file layout, renamed to rte_mempool_handler.[hc]
 * moved the default handlers into rte_mempool_default.c
 * moved the example handler out into app/test/test_ext_mempool.c
 * removed is_mc/is_mp change, slight perf degradation on sp cached operation
 * removed stack handler, may re-introduce at a later date
 * Changes out of code reviews

v2 changes:
 * There was a lot of duplicate code between rte_mempool_xmem_create and
   rte_mempool_create_ext. This has now been refactored and is now
   hopefully cleaner.
 * The RTE_NEXT_ABI define is now used to allow building of the library
   in a format that is compatible with binaries built against previous
   versions of DPDK.
 * Changes out of code reviews. Hopefully I've got most of them included.

The External Mempool Manager is an extension to the mempool API that allows
users to add and use an external mempool manager, which allows external memory
subsystems such as external hardware memory management systems and software
based memory allocators to be used with DPDK.

The existing API to the internal DPDK mempool manager will remain unchanged
and will be backward compatible. However, there will be an ABI breakage, as
the mempool struct is changing. These changes are all contained within
RTE_NEXT_ABI defs, and the current or new code can be selected with
the CONFIG_RTE_NEXT_ABI config setting.

There are two aspects to the external mempool manager.
  1. Adding the code for your new mempool operations (ops). This is
     achieved by adding a new mempool ops source file into the
     librte_mempool library, and using the MEMPOOL_REGISTER_OPS macro.
  2. Using the new API: call rte_mempool_create_empty and
     rte_mempool_set_ops_byname to create a new mempool, using the name
     parameter to identify which ops to use.

New API calls added
 1. A new rte_mempool_create_empty() function
 2. rte_mempool_set_ops_byname(), which sets the mempool's ops (functions)
 3. rte_mempool_populate_default() and rte_mempool_populate_anon() functions,
    which populate the mempool using the relevant ops

Several external mempool managers may be used in the same application. A new
mempool is created with rte_mempool_create_empty(), and is then pointed at
the relevant mempool manager callback (ops) structure by passing that
structure's name to rte_mempool_set_ops_byname().

The old 'create' function can still be called by legacy programs, and will
internally work out which ops to use based on the flags provided (single
producer, single consumer, etc). By default, the built-in ring-based ops are
selected, implementing the existing DPDK mempool behavior.

The external mempool manager needs to provide the following functions.
 1. alloc     - allocates the pool data structure (e.g. a ring) that will
                store the objects
 2. put       - puts an object back into the mempool once an application has
                finished with it
 3. get       - gets an object from the mempool for use by the application
 4. get_count - gets the number of available objects in the mempool
 5. free      - frees the mempool memory
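
As a rough illustration of what these five callbacks do, here is a minimal
standalone sketch of a pool backed by a simple LIFO stack of pointers. This
is plain C with no DPDK dependency; all names (toy_pool, toy_put, etc.) are
invented for this example and are not part of the DPDK API.

```c
#include <stdlib.h>

/* Illustrative pool state; a real handler's alloc would store
 * something like this in mp->pool_data. */
struct toy_pool {
	void **objs;      /* LIFO stack of free objects */
	unsigned top;     /* number of objects currently in the pool */
	unsigned size;    /* capacity */
};

/* alloc: create the storage that will hold the free objects */
static struct toy_pool *toy_alloc(unsigned size)
{
	struct toy_pool *p = malloc(sizeof(*p));
	if (p == NULL)
		return NULL;
	p->objs = malloc(size * sizeof(void *));
	p->top = 0;
	p->size = size;
	return p;
}

/* put: return n objects to the pool */
static int toy_put(struct toy_pool *p, void * const *obj_table, unsigned n)
{
	unsigned i;
	if (p->top + n > p->size)
		return -1;
	for (i = 0; i < n; i++)
		p->objs[p->top++] = obj_table[i];
	return 0;
}

/* get: take n objects out of the pool */
static int toy_get(struct toy_pool *p, void **obj_table, unsigned n)
{
	unsigned i;
	if (p->top < n)
		return -1;
	for (i = 0; i < n; i++)
		obj_table[i] = p->objs[--p->top];
	return 0;
}

/* get_count: number of objects currently available */
static unsigned toy_get_count(const struct toy_pool *p)
{
	return p->top;
}

/* free: release the pool storage */
static void toy_free(struct toy_pool *p)
{
	free(p->objs);
	free(p);
}
```

In the real API the put/get callbacks take the struct rte_mempool pointer
instead of a private pool pointer, and recover their state from it.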

Every time a get/put/get_count is called from the application/PMD, the
callback for that mempool is called. These functions are in the fastpath,
and any unoptimised ops may limit performance.

The new APIs are as follows:

1. rte_mempool_create_empty

struct rte_mempool *
rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
    unsigned cache_size, unsigned private_data_size,
    int socket_id, unsigned flags);

2. rte_mempool_set_ops_byname()

int
rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name);

3. rte_mempool_populate_default()

int rte_mempool_populate_default(struct rte_mempool *mp);

4. rte_mempool_populate_anon()

int rte_mempool_populate_anon(struct rte_mempool *mp);

Please see rte_mempool.h for further information on the parameters.


The important thing to note is that the mempool ops struct is passed by name
to rte_mempool_set_ops_byname, which looks through the ops struct array to
get the ops_index, which is then stored in the rte_mempool structure. This
allows multiple processes to use the same mempool, as the function pointers
are accessed via the ops index.
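
The name-to-index lookup and index-based dispatch can be modelled in a few
lines of standalone C. This is an illustrative simplification, not the DPDK
implementation; all model_* names are invented for this sketch.

```c
#include <string.h>

#define MODEL_MAX_OPS 16

/* Simplified ops struct: just a name and one callback */
struct model_ops {
	char name[32];
	unsigned (*get_count)(void);
};

/* Per-process ops table; each process registers the same ops at the
 * same indices, so an index is valid across processes where a raw
 * function pointer would not be. */
static struct model_ops ops_table[MODEL_MAX_OPS];
static int num_ops;

/* Register an ops struct by copying it into the table, returning
 * the index a mempool would store. */
static int model_register(const struct model_ops *ops)
{
	if (num_ops >= MODEL_MAX_OPS)
		return -1;
	ops_table[num_ops] = *ops;
	return num_ops++;
}

/* Resolve a name to an index, as rte_mempool_set_ops_byname does. */
static int model_index_byname(const char *name)
{
	int i;
	for (i = 0; i < num_ops; i++)
		if (strcmp(ops_table[i].name, name) == 0)
			return i;
	return -1;
}

/* Example callback for registration */
static unsigned dummy_count(void) { return 42; }
```

Fast-path calls then index into ops_table and invoke the stored callback,
which is essentially what the rte_mempool_ops_enqueue_bulk/_dequeue_bulk
wrappers in the patch below do via mp->ops_index.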

The mempool ops structure contains callbacks to the implementation of
the ops function, and is set up for registration as follows:

static const struct rte_mempool_ops ops_sp_mc = {
    .name = "ring_sp_mc",
    .alloc = rte_mempool_common_ring_alloc,
    .put = common_ring_sp_put,
    .get = common_ring_mc_get,
    .get_count = common_ring_get_count,
    .free = common_ring_free,
};

And then the following macro will register the ops in the array of ops
structures:

MEMPOOL_REGISTER_OPS(ops_sp_mc);

For an example of API usage, please see app/test/test_mempool.c, which
implements a rudimentary "custom_handler" mempool manager using a simple
malloc for each mempool object. This file also contains the callbacks and
self-registration for the new handler.
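
The self-registration works via a constructor function that runs before
main(), which is what the registration macro in the patch below expands to.
A minimal standalone model (invented model_*/MODEL_* names, relying on the
GCC/Clang __attribute__((constructor)) extension) looks like:

```c
/* Flag set by the model registration function; in the real library this
 * would be a slot claimed in the ops table. */
static int registered;

static int model_ops_register(void)
{
	registered = 1;
	return 0;
}

/* Model of the registration macro: defines a constructor that registers
 * the named ops struct automatically at program startup. */
#define MODEL_REGISTER_OPS(ops)						\
	static void __attribute__((constructor, used))			\
	model_init_##ops(void)						\
	{								\
		model_ops_register();					\
	}

/* Expanding the macro is all a handler source file needs to do. */
MODEL_REGISTER_OPS(my_ops)
```

Because the constructor runs at load time, simply linking a handler source
file into librte_mempool is enough to make its ops selectable by name.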

David Hunt (2):
  mempool: support external mempool operations
  mbuf: make default mempool ops configurable at build

Olivier Matz (1):
  app/test: test external mempool manager


* [PATCH v9 1/3] mempool: support external mempool operations
  2016-06-10 15:16               ` [PATCH v9 0/3] mempool: add external mempool manager David Hunt
@ 2016-06-10 15:16                 ` David Hunt
  2016-06-13 12:16                   ` Olivier Matz
  2016-06-10 15:16                 ` [PATCH v9 2/3] app/test: test external mempool manager David Hunt
                                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 238+ messages in thread
From: David Hunt @ 2016-06-10 15:16 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, viktorin, jerin.jacob, shreyansh.jain, David Hunt

Until now, the objects stored in a mempool were internally stored in a
ring. This patch introduces the possibility to register external handlers
replacing the ring.

The default behavior remains unchanged, but calling the new function
rte_mempool_set_ops_byname() right after rte_mempool_create_empty() allows
the user to change the ops that will be used when populating
the mempool.

This patch also adds a set of default ops (function callbacks) based
on rte_ring.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
---
 app/test/test_mempool_perf.c               |   1 -
 lib/librte_mempool/Makefile                |   2 +
 lib/librte_mempool/rte_mempool.c           |  72 ++++-----
 lib/librte_mempool/rte_mempool.h           | 248 ++++++++++++++++++++++++++---
 lib/librte_mempool/rte_mempool_default.c   | 159 ++++++++++++++++++
 lib/librte_mempool/rte_mempool_ops.c       | 150 +++++++++++++++++
 lib/librte_mempool/rte_mempool_version.map |   4 +
 7 files changed, 570 insertions(+), 66 deletions(-)
 create mode 100644 lib/librte_mempool/rte_mempool_default.c
 create mode 100644 lib/librte_mempool/rte_mempool_ops.c

diff --git a/app/test/test_mempool_perf.c b/app/test/test_mempool_perf.c
index cdc02a0..091c1df 100644
--- a/app/test/test_mempool_perf.c
+++ b/app/test/test_mempool_perf.c
@@ -161,7 +161,6 @@ per_lcore_mempool_test(__attribute__((unused)) void *arg)
 							   n_get_bulk);
 				if (unlikely(ret < 0)) {
 					rte_mempool_dump(stdout, mp);
-					rte_ring_dump(stdout, mp->ring);
 					/* in this case, objects are lost... */
 					return -1;
 				}
diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
index 43423e0..8cac29b 100644
--- a/lib/librte_mempool/Makefile
+++ b/lib/librte_mempool/Makefile
@@ -42,6 +42,8 @@ LIBABIVER := 2
 
 # all source are stored in SRCS-y
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool.c
+SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_ops.c
+SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_default.c
 # install includes
 SYMLINK-$(CONFIG_RTE_LIBRTE_MEMPOOL)-include := rte_mempool.h
 
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index b54de43..e2ef196 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -148,7 +148,7 @@ mempool_add_elem(struct rte_mempool *mp, void *obj, phys_addr_t physaddr)
 #endif
 
 	/* enqueue in ring */
-	rte_ring_sp_enqueue(mp->ring, obj);
+	rte_mempool_ops_enqueue_bulk(mp, &obj, 1);
 }
 
 /* call obj_cb() for each mempool element */
@@ -303,40 +303,6 @@ rte_mempool_xmem_usage(__rte_unused void *vaddr, uint32_t elt_num,
 	return (size_t)paddr_idx << pg_shift;
 }
 
-/* create the internal ring */
-static int
-rte_mempool_ring_create(struct rte_mempool *mp)
-{
-	int rg_flags = 0, ret;
-	char rg_name[RTE_RING_NAMESIZE];
-	struct rte_ring *r;
-
-	ret = snprintf(rg_name, sizeof(rg_name),
-		RTE_MEMPOOL_MZ_FORMAT, mp->name);
-	if (ret < 0 || ret >= (int)sizeof(rg_name))
-		return -ENAMETOOLONG;
-
-	/* ring flags */
-	if (mp->flags & MEMPOOL_F_SP_PUT)
-		rg_flags |= RING_F_SP_ENQ;
-	if (mp->flags & MEMPOOL_F_SC_GET)
-		rg_flags |= RING_F_SC_DEQ;
-
-	/* Allocate the ring that will be used to store objects.
-	 * Ring functions will return appropriate errors if we are
-	 * running as a secondary process etc., so no checks made
-	 * in this function for that condition.
-	 */
-	r = rte_ring_create(rg_name, rte_align32pow2(mp->size + 1),
-		mp->socket_id, rg_flags);
-	if (r == NULL)
-		return -rte_errno;
-
-	mp->ring = r;
-	mp->flags |= MEMPOOL_F_RING_CREATED;
-	return 0;
-}
-
 /* free a memchunk allocated with rte_memzone_reserve() */
 static void
 rte_mempool_memchunk_mz_free(__rte_unused struct rte_mempool_memhdr *memhdr,
@@ -354,7 +320,7 @@ rte_mempool_free_memchunks(struct rte_mempool *mp)
 	void *elt;
 
 	while (!STAILQ_EMPTY(&mp->elt_list)) {
-		rte_ring_sc_dequeue(mp->ring, &elt);
+		rte_mempool_ops_dequeue_bulk(mp, &elt, 1);
 		(void)elt;
 		STAILQ_REMOVE_HEAD(&mp->elt_list, next);
 		mp->populated_size--;
@@ -386,10 +352,14 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
 	int ret;
 
 	/* create the internal ring if not already done */
-	if ((mp->flags & MEMPOOL_F_RING_CREATED) == 0) {
-		ret = rte_mempool_ring_create(mp);
-		if (ret < 0)
-			return ret;
+	if ((mp->flags & MEMPOOL_F_POOL_CREATED) == 0) {
+		rte_errno = 0;
+		ret = rte_mempool_ops_alloc(mp);
+		if (ret != 0) {
+			if (rte_errno == 0)
+				return -EINVAL;
+			return -rte_errno;
+		}
 	}
 
 	/* mempool is already populated */
@@ -703,7 +673,7 @@ rte_mempool_free(struct rte_mempool *mp)
 	rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
 
 	rte_mempool_free_memchunks(mp);
-	rte_ring_free(mp->ring);
+	rte_mempool_ops_free(mp);
 	rte_memzone_free(mp->mz);
 }
 
@@ -815,6 +785,7 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
 		RTE_PTR_ADD(mp, MEMPOOL_HEADER_SIZE(mp, 0));
 
 	te->data = mp;
+
 	rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
 	TAILQ_INSERT_TAIL(mempool_list, te, next);
 	rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
@@ -844,6 +815,19 @@ rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
 	if (mp == NULL)
 		return NULL;
 
+	/*
+	 * Since we have 4 combinations of SP/SC/MP/MC, examine the flags to
+	 * set the correct index into the table of ops structs.
+	 */
+	if ((flags & MEMPOOL_F_SP_PUT) && (flags & MEMPOOL_F_SC_GET))
+		rte_mempool_set_ops_byname(mp, "ring_sp_sc");
+	else if (flags & MEMPOOL_F_SP_PUT)
+		rte_mempool_set_ops_byname(mp, "ring_sp_mc");
+	else if (flags & MEMPOOL_F_SC_GET)
+		rte_mempool_set_ops_byname(mp, "ring_mp_sc");
+	else
+		rte_mempool_set_ops_byname(mp, "ring_mp_mc");
+
 	/* call the mempool priv initializer */
 	if (mp_init)
 		mp_init(mp, mp_init_arg);
@@ -930,7 +914,7 @@ rte_mempool_count(const struct rte_mempool *mp)
 	unsigned count;
 	unsigned lcore_id;
 
-	count = rte_ring_count(mp->ring);
+	count = rte_mempool_ops_get_count(mp);
 
 	if (mp->cache_size == 0)
 		return count;
@@ -1123,7 +1107,7 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 
 	fprintf(f, "mempool <%s>@%p\n", mp->name, mp);
 	fprintf(f, "  flags=%x\n", mp->flags);
-	fprintf(f, "  ring=<%s>@%p\n", mp->ring->name, mp->ring);
+	fprintf(f, "  pool=%p\n", mp->pool_data);
 	fprintf(f, "  phys_addr=0x%" PRIx64 "\n", mp->mz->phys_addr);
 	fprintf(f, "  nb_mem_chunks=%u\n", mp->nb_mem_chunks);
 	fprintf(f, "  size=%"PRIu32"\n", mp->size);
@@ -1144,7 +1128,7 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 	}
 
 	cache_count = rte_mempool_dump_cache(f, mp);
-	common_count = rte_ring_count(mp->ring);
+	common_count = rte_mempool_ops_get_count(mp);
 	if ((cache_count + common_count) > mp->size)
 		common_count = mp->size - cache_count;
 	fprintf(f, "  common_pool_count=%u\n", common_count);
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 60339bd..80e0d1c 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -67,6 +67,7 @@
 #include <inttypes.h>
 #include <sys/queue.h>
 
+#include <rte_spinlock.h>
 #include <rte_log.h>
 #include <rte_debug.h>
 #include <rte_lcore.h>
@@ -203,10 +204,13 @@ struct rte_mempool_memhdr {
  */
 struct rte_mempool {
 	char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool. */
-	struct rte_ring *ring;           /**< Ring to store objects. */
+	union {
+		void *pool_data;         /**< Ring or pool to store objects */
+		uint64_t pool_id;        /**< External mempool identifier */
+	};
 	const struct rte_memzone *mz;    /**< Memzone where pool is allocated */
 	int flags;                       /**< Flags of the mempool. */
-	int socket_id;                   /**< Socket id passed at mempool creation. */
+	int socket_id;                   /**< Socket id passed at create */
 	uint32_t size;                   /**< Max size of the mempool. */
 	uint32_t cache_size;             /**< Size of per-lcore local cache. */
 	uint32_t cache_flushthresh;
@@ -217,6 +221,14 @@ struct rte_mempool {
 	uint32_t trailer_size;           /**< Size of trailer (after elt). */
 
 	unsigned private_data_size;      /**< Size of private data. */
+	/**
+	 * Index into rte_mempool_ops_table array of mempool ops
+	 * structs, which contain callback function pointers.
+	 * We're using an index here rather than pointers to the callbacks
+	 * to facilitate any secondary processes that may want to use
+	 * this mempool.
+	 */
+	int32_t ops_index;
 
 	struct rte_mempool_cache *local_cache; /**< Per-lcore local cache */
 
@@ -235,7 +247,7 @@ struct rte_mempool {
 #define MEMPOOL_F_NO_CACHE_ALIGN 0x0002 /**< Do not align objs on cache lines.*/
 #define MEMPOOL_F_SP_PUT         0x0004 /**< Default put is "single-producer".*/
 #define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
-#define MEMPOOL_F_RING_CREATED   0x0010 /**< Internal: ring is created */
+#define MEMPOOL_F_POOL_CREATED   0x0010 /**< Internal: pool is created */
 #define MEMPOOL_F_NO_PHYS_CONTIG 0x0020 /**< Don't need physically contiguous objs. */
 
 /**
@@ -325,6 +337,211 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp,
 #define __mempool_check_cookies(mp, obj_table_const, n, free) do {} while(0)
 #endif /* RTE_LIBRTE_MEMPOOL_DEBUG */
 
+#define RTE_MEMPOOL_OPS_NAMESIZE 32 /**< Max length of ops struct name. */
+
+/**
+ * Prototype for implementation specific data provisioning function.
+ *
+ * The function should provide the implementation specific memory for
+ * use by the other mempool ops functions in a given mempool ops struct.
+ * E.g. the default ops provides an instance of the rte_ring for this purpose.
+ * Other implementations will most likely use a different type of data
+ * structure, which is transparent to the application programmer.
+ */
+typedef int (*rte_mempool_alloc_t)(struct rte_mempool *mp);
+
+/**
+ * Free the opaque private data pointed to by mp->pool_data pointer.
+ */
+typedef void (*rte_mempool_free_t)(struct rte_mempool *mp);
+
+/**
+ * Put an object in the external pool.
+ */
+typedef int (*rte_mempool_put_t)(struct rte_mempool *mp,
+		void * const *obj_table, unsigned int n);
+
+/**
+ * Get an object from the external pool.
+ */
+typedef int (*rte_mempool_get_t)(struct rte_mempool *mp,
+		void **obj_table, unsigned int n);
+
+/**
+ * Return the number of available objects in the external pool.
+ */
+typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
+
+/** Structure defining mempool operations structure */
+struct rte_mempool_ops {
+	char name[RTE_MEMPOOL_OPS_NAMESIZE]; /**< Name of mempool ops struct */
+	rte_mempool_alloc_t alloc;       /**< Allocate private data */
+	rte_mempool_free_t free;         /**< Free the external pool. */
+	rte_mempool_put_t put;           /**< Put an object. */
+	rte_mempool_get_t get;           /**< Get an object. */
+	rte_mempool_get_count get_count; /**< Get qty of available objs. */
+} __rte_cache_aligned;
+
+#define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */
+
+/**
+ * Structure storing the table of registered ops structs, each of which contain
+ * the function pointers for the mempool ops functions.
+ * Each process has its own storage for this ops struct array so that
+ * the mempools can be shared across primary and secondary processes.
+ * The indices used to access the array are valid across processes, whereas
+ * any function pointers stored directly in the mempool struct would not be.
+ * This results in us simply having "ops_index" in the mempool struct.
+ */
+struct rte_mempool_ops_table {
+	rte_spinlock_t sl;     /**< Spinlock for add/delete. */
+	uint32_t num_ops;      /**< Number of used ops structs in the table. */
+	/**
+	 * Storage for all possible ops structs.
+	 */
+	struct rte_mempool_ops ops[RTE_MEMPOOL_MAX_OPS_IDX];
+} __rte_cache_aligned;
+
+/** Array of registered ops structs */
+extern struct rte_mempool_ops_table rte_mempool_ops_table;
+
+/**
+ * @internal Get the mempool ops struct from its index.
+ *
+ * @param ops_index
+ *   The index of the ops struct in the ops struct table. It must be a valid
+ *   index: (0 <= idx < num_ops).
+ * @return
+ *   The pointer to the ops struct in the table.
+ */
+static inline struct rte_mempool_ops *
+rte_mempool_ops_get(int ops_index)
+{
+	RTE_VERIFY(ops_index < RTE_MEMPOOL_MAX_OPS_IDX);
+
+	return &rte_mempool_ops_table.ops[ops_index];
+}
+
+/**
+ * @internal Wrapper for mempool_ops alloc callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @return
+ *   - 0: Success; successfully allocated mempool pool_data
+ *   - <0: Error; code of alloc function.
+ */
+int
+rte_mempool_ops_alloc(struct rte_mempool *mp);
+
+/**
+ * @internal Wrapper for mempool_ops get callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param obj_table
+ *   Pointer to a table of void * pointers (objects).
+ * @param n
+ *   Number of objects to get.
+ * @return
+ *   - 0: Success; got n objects.
+ *   - <0: Error; code of get function.
+ */
+static inline int
+rte_mempool_ops_dequeue_bulk(struct rte_mempool *mp,
+		void **obj_table, unsigned n)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_ops_get(mp->ops_index);
+	return ops->get(mp, obj_table, n);
+}
+
+/**
+ * @internal wrapper for mempool_ops put callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param obj_table
+ *   Pointer to a table of void * pointers (objects).
+ * @param n
+ *   Number of objects to put.
+ * @return
+ *   - 0: Success; n objects supplied.
+ *   - <0: Error; code of put function.
+ */
+static inline int
+rte_mempool_ops_enqueue_bulk(struct rte_mempool *mp, void * const *obj_table,
+		unsigned n)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_ops_get(mp->ops_index);
+	return ops->put(mp, obj_table, n);
+}
+
+/**
+ * @internal wrapper for mempool_ops get_count callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @return
+ *   The number of available objects in the external pool.
+ */
+unsigned
+rte_mempool_ops_get_count(const struct rte_mempool *mp);
+
+/**
+ * @internal wrapper for mempool_ops free callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ */
+void
+rte_mempool_ops_free(struct rte_mempool *mp);
+
+/**
+ * Set the ops of a mempool
+ *
+ * This can only be done on a mempool that is not populated, i.e. just after
+ * a call to rte_mempool_create_empty().
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param name
+ *   Name of the ops structure to use for this mempool.
+ * @return
+ *   - 0: Success; the mempool is now using the requested ops functions
+ *   - -EINVAL - Invalid ops struct name provided
+ *   - -EEXIST - mempool already has an ops struct assigned
+ */
+int
+rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name);
+
+/**
+ * Register mempool operations
+ *
+ * @param h
+ *   Pointer to an ops structure to register
+ * @return
+ *   - >=0: Success; return the index of the ops struct in the table.
+ *   - -EINVAL - some missing callbacks while registering ops struct
+ *   - -ENOSPC - the maximum number of ops structs has been reached
+ */
+int rte_mempool_ops_register(const struct rte_mempool_ops *ops);
+
+/**
+ * Macro to statically register the ops of an external mempool manager.
+ * Note that rte_mempool_ops_register fails silently here when more
+ * than RTE_MEMPOOL_MAX_OPS_IDX ops structs are registered.
+ */
+#define MEMPOOL_REGISTER_OPS(ops)					\
+	void mp_hdlr_init_##ops(void);					\
+	void __attribute__((constructor, used)) mp_hdlr_init_##ops(void)\
+	{								\
+		rte_mempool_ops_register(&ops);			\
+	}
+
 /**
  * An object callback function for mempool.
  *
@@ -774,7 +991,7 @@ __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
 	cache->len += n;
 
 	if (cache->len >= flushthresh) {
-		rte_ring_mp_enqueue_bulk(mp->ring, &cache->objs[cache_size],
+		rte_mempool_ops_enqueue_bulk(mp, &cache->objs[cache_size],
 				cache->len - cache_size);
 		cache->len = cache_size;
 	}
@@ -785,19 +1002,10 @@ ring_enqueue:
 
 	/* push remaining objects in ring */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-	if (is_mp) {
-		if (rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n) < 0)
-			rte_panic("cannot put objects in mempool\n");
-	}
-	else {
-		if (rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n) < 0)
-			rte_panic("cannot put objects in mempool\n");
-	}
+	if (rte_mempool_ops_enqueue_bulk(mp, obj_table, n) < 0)
+		rte_panic("cannot put objects in mempool\n");
 #else
-	if (is_mp)
-		rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n);
-	else
-		rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n);
+	rte_mempool_ops_enqueue_bulk(mp, obj_table, n);
 #endif
 }
 
@@ -945,7 +1153,8 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
 		uint32_t req = n + (cache_size - cache->len);
 
 		/* How many do we require i.e. number to fill the cache + the request */
-		ret = rte_ring_mc_dequeue_bulk(mp->ring, &cache->objs[cache->len], req);
+		ret = rte_mempool_ops_dequeue_bulk(mp,
+			&cache->objs[cache->len], req);
 		if (unlikely(ret < 0)) {
 			/*
 			 * In the offchance that we are buffer constrained,
@@ -972,10 +1181,7 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
 ring_dequeue:
 
 	/* get remaining objects from ring */
-	if (is_mc)
-		ret = rte_ring_mc_dequeue_bulk(mp->ring, obj_table, n);
-	else
-		ret = rte_ring_sc_dequeue_bulk(mp->ring, obj_table, n);
+	ret = rte_mempool_ops_dequeue_bulk(mp, obj_table, n);
 
 	if (ret < 0)
 		__MEMPOOL_STAT_ADD(mp, get_fail, n);
diff --git a/lib/librte_mempool/rte_mempool_default.c b/lib/librte_mempool/rte_mempool_default.c
new file mode 100644
index 0000000..e4aaad0
--- /dev/null
+++ b/lib/librte_mempool/rte_mempool_default.c
@@ -0,0 +1,159 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <string.h>
+
+#include <rte_errno.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+
+static int
+common_ring_mp_put(struct rte_mempool *mp, void * const *obj_table, unsigned n)
+{
+	return rte_ring_mp_enqueue_bulk(mp->pool_data, obj_table, n);
+}
+
+static int
+common_ring_sp_put(struct rte_mempool *mp, void * const *obj_table, unsigned n)
+{
+	return rte_ring_sp_enqueue_bulk(mp->pool_data, obj_table, n);
+}
+
+static int
+common_ring_mc_get(struct rte_mempool *mp, void **obj_table, unsigned n)
+{
+	return rte_ring_mc_dequeue_bulk(mp->pool_data, obj_table, n);
+}
+
+static int
+common_ring_sc_get(struct rte_mempool *mp, void **obj_table, unsigned n)
+{
+	return rte_ring_sc_dequeue_bulk(mp->pool_data, obj_table, n);
+}
+
+static unsigned
+common_ring_get_count(const struct rte_mempool *mp)
+{
+	return rte_ring_count(mp->pool_data);
+}
+
+
+static int
+common_ring_alloc(struct rte_mempool *mp)
+{
+	int rg_flags = 0, ret;
+	char rg_name[RTE_RING_NAMESIZE];
+	struct rte_ring *r;
+
+	ret = snprintf(rg_name, sizeof(rg_name),
+		RTE_MEMPOOL_MZ_FORMAT, mp->name);
+	if (ret < 0 || ret >= (int)sizeof(rg_name)) {
+		rte_errno = ENAMETOOLONG;
+		return -rte_errno;
+	}
+
+	/* ring flags */
+	if (mp->flags & MEMPOOL_F_SP_PUT)
+		rg_flags |= RING_F_SP_ENQ;
+	if (mp->flags & MEMPOOL_F_SC_GET)
+		rg_flags |= RING_F_SC_DEQ;
+
+	/*
+	 * Allocate the ring that will be used to store objects.
+	 * Ring functions will return appropriate errors if we are
+	 * running as a secondary process etc., so no checks made
+	 * in this function for that condition.
+	 */
+	r = rte_ring_create(rg_name, rte_align32pow2(mp->size + 1),
+		mp->socket_id, rg_flags);
+	if (r == NULL)
+		return -rte_errno;
+
+	mp->pool_data = r;
+
+	return 0;
+}
+
+static void
+common_ring_free(struct rte_mempool *mp)
+{
+	rte_ring_free(mp->pool_data);
+}
+
+/*
+ * The following 4 declarations of mempool ops structs address
+ * the need for the backward compatible mempool managers for
+ * single/multi producers and single/multi consumers as dictated by the
+ * flags provided to the rte_mempool_create function
+ */
+static const struct rte_mempool_ops ops_mp_mc = {
+	.name = "ring_mp_mc",
+	.alloc = common_ring_alloc,
+	.free = common_ring_free,
+	.put = common_ring_mp_put,
+	.get = common_ring_mc_get,
+	.get_count = common_ring_get_count,
+};
+
+static const struct rte_mempool_ops ops_sp_sc = {
+	.name = "ring_sp_sc",
+	.alloc = common_ring_alloc,
+	.free = common_ring_free,
+	.put = common_ring_sp_put,
+	.get = common_ring_sc_get,
+	.get_count = common_ring_get_count,
+};
+
+static const struct rte_mempool_ops ops_mp_sc = {
+	.name = "ring_mp_sc",
+	.alloc = common_ring_alloc,
+	.free = common_ring_free,
+	.put = common_ring_mp_put,
+	.get = common_ring_sc_get,
+	.get_count = common_ring_get_count,
+};
+
+static const struct rte_mempool_ops ops_sp_mc = {
+	.name = "ring_sp_mc",
+	.alloc = common_ring_alloc,
+	.free = common_ring_free,
+	.put = common_ring_sp_put,
+	.get = common_ring_mc_get,
+	.get_count = common_ring_get_count,
+};
+
+MEMPOOL_REGISTER_OPS(ops_mp_mc);
+MEMPOOL_REGISTER_OPS(ops_sp_sc);
+MEMPOOL_REGISTER_OPS(ops_mp_sc);
+MEMPOOL_REGISTER_OPS(ops_sp_mc);
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
new file mode 100644
index 0000000..f2f7d7d
--- /dev/null
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -0,0 +1,150 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2016 6WIND S.A.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <string.h>
+
+#include <rte_mempool.h>
+#include <rte_errno.h>
+
+/* indirect jump table to support external memory pools */
+struct rte_mempool_ops_table rte_mempool_ops_table = {
+	.sl =  RTE_SPINLOCK_INITIALIZER,
+	.num_ops = 0
+};
+
+/* add a new ops struct in rte_mempool_ops_table, return its index */
+int
+rte_mempool_ops_register(const struct rte_mempool_ops *h)
+{
+	struct rte_mempool_ops *ops;
+	int16_t ops_index;
+
+	rte_spinlock_lock(&rte_mempool_ops_table.sl);
+
+	if (rte_mempool_ops_table.num_ops >=
+			RTE_MEMPOOL_MAX_OPS_IDX) {
+		rte_spinlock_unlock(&rte_mempool_ops_table.sl);
+		RTE_LOG(ERR, MEMPOOL,
+			"Maximum number of mempool ops structs exceeded\n");
+		return -ENOSPC;
+	}
+
+	if (h->alloc == NULL || h->put == NULL ||
+			h->get == NULL || h->get_count == NULL) {
+		rte_spinlock_unlock(&rte_mempool_ops_table.sl);
+		RTE_LOG(ERR, MEMPOOL,
+			"Missing callback while registering mempool ops\n");
+		return -EINVAL;
+	}
+
+	if (strlen(h->name) >= sizeof(ops->name) - 1) {
+		rte_spinlock_unlock(&rte_mempool_ops_table.sl);
+		RTE_LOG(DEBUG, EAL, "%s(): mempool_ops <%s>: name too long\n", __func__, h->name);
+		rte_errno = EEXIST;
+		return -EEXIST;
+	}
+
+	ops_index = rte_mempool_ops_table.num_ops++;
+	ops = &rte_mempool_ops_table.ops[ops_index];
+	snprintf(ops->name, sizeof(ops->name), "%s", h->name);
+	ops->alloc = h->alloc;
+	ops->put = h->put;
+	ops->get = h->get;
+	ops->get_count = h->get_count;
+
+	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
+
+	return ops_index;
+}
+
+/* wrapper to allocate an external mempool's private (pool) data */
+int
+rte_mempool_ops_alloc(struct rte_mempool *mp)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_ops_get(mp->ops_index);
+	if (ops->alloc == NULL)
+		return -ENOMEM;
+	return ops->alloc(mp);
+}
+
+/* wrapper to free an external pool ops */
+void
+rte_mempool_ops_free(struct rte_mempool *mp)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_ops_get(mp->ops_index);
+	if (ops->free == NULL)
+		return;
+	return ops->free(mp);
+}
+
+/* wrapper to get available objects in an external mempool */
+unsigned int
+rte_mempool_ops_get_count(const struct rte_mempool *mp)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_ops_get(mp->ops_index);
+	return ops->get_count(mp);
+}
+
+/* sets mempool ops previously registered by rte_mempool_ops_register */
+int
+rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name)
+{
+	struct rte_mempool_ops *ops = NULL;
+	unsigned i;
+
+	/* too late, the mempool is already populated */
+	if (mp->flags & MEMPOOL_F_POOL_CREATED)
+		return -EEXIST;
+
+	for (i = 0; i < rte_mempool_ops_table.num_ops; i++) {
+		if (!strcmp(name,
+				rte_mempool_ops_table.ops[i].name)) {
+			ops = &rte_mempool_ops_table.ops[i];
+			break;
+		}
+	}
+
+	if (ops == NULL)
+		return -EINVAL;
+
+	mp->ops_index = i;
+	return 0;
+}
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index f63461b..f49c205 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -19,6 +19,8 @@ DPDK_2.0 {
 DPDK_16.7 {
 	global:
 
+	rte_mempool_ops_table;
+
 	rte_mempool_check_cookies;
 	rte_mempool_obj_iter;
 	rte_mempool_mem_iter;
@@ -29,6 +31,8 @@ DPDK_16.7 {
 	rte_mempool_populate_default;
 	rte_mempool_populate_anon;
 	rte_mempool_free;
+	rte_mempool_set_ops_byname;
+	rte_mempool_ops_register;
 
 	local: *;
 } DPDK_2.0;
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 238+ messages in thread

* [PATCH v9 2/3] app/test: test external mempool manager
  2016-06-10 15:16               ` [PATCH v9 0/3] mempool: add external mempool manager David Hunt
  2016-06-10 15:16                 ` [PATCH v9 1/3] mempool: support external mempool operations David Hunt
@ 2016-06-10 15:16                 ` David Hunt
  2016-06-10 15:16                 ` [PATCH v9 3/3] mbuf: make default mempool ops configurable at build David Hunt
  2016-06-14  9:46                 ` [PATCH v10 0/3] mempool: add external mempool manager David Hunt
  3 siblings, 0 replies; 238+ messages in thread
From: David Hunt @ 2016-06-10 15:16 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, viktorin, jerin.jacob, shreyansh.jain, David Hunt

Use a minimal custom mempool external ops and check that it also
passes basic mempool autotests.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
---
 app/test/test_mempool.c | 115 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 115 insertions(+)

diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c
index b586249..e74f5d2 100644
--- a/app/test/test_mempool.c
+++ b/app/test/test_mempool.c
@@ -83,6 +83,98 @@
 static rte_atomic32_t synchro;
 
 /*
+ * Simple example of custom mempool structure. Holds pointers to all the
+ * elements which are simply malloc'd in this example.
+ */
+struct custom_mempool {
+	rte_spinlock_t lock;
+	unsigned count;
+	unsigned size;
+	void *elts[];
+};
+
+/*
+ * Loop through all the element pointers and allocate a chunk of memory, then
+ * insert that memory into the ring.
+ */
+static int
+custom_mempool_alloc(struct rte_mempool *mp)
+{
+	struct custom_mempool *cm;
+
+	cm = rte_zmalloc("custom_mempool",
+		sizeof(struct custom_mempool) + mp->size * sizeof(void *), 0);
+	if (cm == NULL)
+		return -ENOMEM;
+
+	rte_spinlock_init(&cm->lock);
+	cm->count = 0;
+	cm->size = mp->size;
+	mp->pool_data = cm;
+	return 0;
+}
+
+static void
+custom_mempool_free(struct rte_mempool *mp)
+{
+	rte_free((void *)(mp->pool_data));
+}
+
+static int
+custom_mempool_put(struct rte_mempool *mp, void * const *obj_table, unsigned n)
+{
+	struct custom_mempool *cm = (struct custom_mempool *)(mp->pool_data);
+	int ret = 0;
+
+	rte_spinlock_lock(&cm->lock);
+	if (cm->count + n > cm->size) {
+		ret = -ENOBUFS;
+	} else {
+		memcpy(&cm->elts[cm->count], obj_table, sizeof(void *) * n);
+		cm->count += n;
+	}
+	rte_spinlock_unlock(&cm->lock);
+	return ret;
+}
+
+
+static int
+custom_mempool_get(struct rte_mempool *mp, void **obj_table, unsigned n)
+{
+	struct custom_mempool *cm = (struct custom_mempool *)(mp->pool_data);
+	int ret = 0;
+
+	rte_spinlock_lock(&cm->lock);
+	if (n > cm->count) {
+		ret = -ENOENT;
+	} else {
+		cm->count -= n;
+		memcpy(obj_table, &cm->elts[cm->count], sizeof(void *) * n);
+	}
+	rte_spinlock_unlock(&cm->lock);
+	return ret;
+}
+
+static unsigned
+custom_mempool_get_count(const struct rte_mempool *mp)
+{
+	struct custom_mempool *cm = (struct custom_mempool *)(mp->pool_data);
+
+	return cm->count;
+}
+
+static struct rte_mempool_ops mempool_ops_custom = {
+	.name = "custom_handler",
+	.alloc = custom_mempool_alloc,
+	.free = custom_mempool_free,
+	.put = custom_mempool_put,
+	.get = custom_mempool_get,
+	.get_count = custom_mempool_get_count,
+};
+
+MEMPOOL_REGISTER_OPS(mempool_ops_custom);
+
+/*
  * save the object number in the first 4 bytes of object data. All
  * other bytes are set to 0.
  */
@@ -477,6 +569,7 @@ test_mempool(void)
 {
 	struct rte_mempool *mp_cache = NULL;
 	struct rte_mempool *mp_nocache = NULL;
+	struct rte_mempool *mp_ext = NULL;
 
 	rte_atomic32_init(&synchro);
 
@@ -505,6 +598,27 @@ test_mempool(void)
 		goto err;
 	}
 
+	/* create a mempool with an external handler */
+	mp_ext = rte_mempool_create_empty("test_ext",
+		MEMPOOL_SIZE,
+		MEMPOOL_ELT_SIZE,
+		RTE_MEMPOOL_CACHE_MAX_SIZE, 0,
+		SOCKET_ID_ANY, 0);
+
+	if (mp_ext == NULL) {
+		printf("cannot allocate mp_ext mempool\n");
+		goto err;
+	}
+	if (rte_mempool_set_ops_byname(mp_ext, "custom_handler") < 0) {
+		printf("cannot set custom handler\n");
+		goto err;
+	}
+	if (rte_mempool_populate_default(mp_ext) < 0) {
+		printf("cannot populate mp_ext mempool\n");
+		goto err;
+	}
+	rte_mempool_obj_iter(mp_ext, my_obj_init, NULL);
+
 	/* retrieve the mempool from its name */
 	if (rte_mempool_lookup("test_nocache") != mp_nocache) {
 		printf("Cannot lookup mempool from its name\n");
@@ -545,6 +659,7 @@ test_mempool(void)
 err:
 	rte_mempool_free(mp_nocache);
 	rte_mempool_free(mp_cache);
+	rte_mempool_free(mp_ext);
 	return -1;
 }
 
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 238+ messages in thread

* [PATCH v9 3/3] mbuf: make default mempool ops configurable at build
  2016-06-10 15:16               ` [PATCH v9 0/3] mempool: add external mempool manager David Hunt
  2016-06-10 15:16                 ` [PATCH v9 1/3] mempool: support external mempool operations David Hunt
  2016-06-10 15:16                 ` [PATCH v9 2/3] app/test: test external mempool manager David Hunt
@ 2016-06-10 15:16                 ` David Hunt
  2016-06-14  9:46                 ` [PATCH v10 0/3] mempool: add external mempool manager David Hunt
  3 siblings, 0 replies; 238+ messages in thread
From: David Hunt @ 2016-06-10 15:16 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, viktorin, jerin.jacob, shreyansh.jain, David Hunt

By default, the mempool ops used for mbuf allocations are based on a multi
producer and multi consumer ring. We could imagine a target (maybe some
network processors?) that provides a hardware-assisted pool
mechanism. In this case, the default configuration for this architecture
would contain a different value for RTE_MBUF_DEFAULT_MEMPOOL_OPS.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
---
 config/common_base         |  1 +
 lib/librte_mbuf/rte_mbuf.c | 26 ++++++++++++++++++++++----
 2 files changed, 23 insertions(+), 4 deletions(-)

diff --git a/config/common_base b/config/common_base
index 47c26f6..899c038 100644
--- a/config/common_base
+++ b/config/common_base
@@ -394,6 +394,7 @@ CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG=n
 #
 CONFIG_RTE_LIBRTE_MBUF=y
 CONFIG_RTE_LIBRTE_MBUF_DEBUG=n
+CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS="ring_mp_mc"
 CONFIG_RTE_MBUF_REFCNT_ATOMIC=y
 CONFIG_RTE_PKTMBUF_HEADROOM=128
 
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index eec1456..491230c 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -153,6 +153,7 @@ rte_pktmbuf_pool_create(const char *name, unsigned n,
 	unsigned cache_size, uint16_t priv_size, uint16_t data_room_size,
 	int socket_id)
 {
+	struct rte_mempool *mp;
 	struct rte_pktmbuf_pool_private mbp_priv;
 	unsigned elt_size;
 
@@ -167,10 +168,27 @@ rte_pktmbuf_pool_create(const char *name, unsigned n,
 	mbp_priv.mbuf_data_room_size = data_room_size;
 	mbp_priv.mbuf_priv_size = priv_size;
 
-	return rte_mempool_create(name, n, elt_size,
-		cache_size, sizeof(struct rte_pktmbuf_pool_private),
-		rte_pktmbuf_pool_init, &mbp_priv, rte_pktmbuf_init, NULL,
-		socket_id, 0);
+	mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
+		 sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
+	if (mp == NULL)
+		return NULL;
+
+	rte_errno = rte_mempool_set_ops_byname(mp,
+			RTE_MBUF_DEFAULT_MEMPOOL_OPS);
+	if (rte_errno != 0) {
+		RTE_LOG(ERR, MBUF, "error setting mempool handler\n");
+		return NULL;
+	}
+	rte_pktmbuf_pool_init(mp, &mbp_priv);
+
+	if (rte_mempool_populate_default(mp) < 0) {
+		rte_mempool_free(mp);
+		return NULL;
+	}
+
+	rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);
+
+	return mp;
 }
 
 /* do some sanity checks on a mbuf: panic if it fails */
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 238+ messages in thread

* Re: [PATCH v9 1/3] mempool: support external mempool operations
  2016-06-10 15:16                 ` [PATCH v9 1/3] mempool: support external mempool operations David Hunt
@ 2016-06-13 12:16                   ` Olivier Matz
  2016-06-13 13:46                     ` Hunt, David
  0 siblings, 1 reply; 238+ messages in thread
From: Olivier Matz @ 2016-06-13 12:16 UTC (permalink / raw)
  To: David Hunt, dev; +Cc: viktorin, jerin.jacob, shreyansh.jain

Hi David,

Some comments below.

On 06/10/2016 05:16 PM, David Hunt wrote:
> Until now, the objects stored in a mempool were internally stored in a
> ring. This patch introduces the possibility to register external handlers
> replacing the ring.
> 
> The default behavior remains unchanged, but calling the new function
> rte_mempool_set_handler() right after rte_mempool_create_empty() allows
> the user to change the handler that will be used when populating
> the mempool.
> 
> This patch also adds a set of default ops (function callbacks) based
> on rte_ring.
> 
> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> Signed-off-by: David Hunt <david.hunt@intel.com>
> 

> ...
> @@ -386,10 +352,14 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
>  	int ret;
>  
>  	/* create the internal ring if not already done */
> -	if ((mp->flags & MEMPOOL_F_RING_CREATED) == 0) {
> -		ret = rte_mempool_ring_create(mp);
> -		if (ret < 0)
> -			return ret;
> +	if ((mp->flags & MEMPOOL_F_POOL_CREATED) == 0) {
> +		rte_errno = 0;
> +		ret = rte_mempool_ops_alloc(mp);
> +		if (ret != 0) {
> +			if (rte_errno == 0)
> +				return -EINVAL;
> +			return -rte_errno;
> +		}
>  	}

The rte_errno should be removed. Just return the error code from
rte_mempool_ops_alloc() on failure.

> +/** Structure defining mempool operations structure */
> +struct rte_mempool_ops {
> +	char name[RTE_MEMPOOL_OPS_NAMESIZE]; /**< Name of mempool ops struct */
> +	rte_mempool_alloc_t alloc;       /**< Allocate private data */
> +	rte_mempool_free_t free;         /**< Free the external pool. */
> +	rte_mempool_put_t put;           /**< Put an object. */
> +	rte_mempool_get_t get;           /**< Get an object. */
> +	rte_mempool_get_count get_count; /**< Get qty of available objs. */
> +} __rte_cache_aligned;
> +

Sorry, I missed that in the previous reviews, but since the get/put
functions have been renamed in dequeue/enqueue, I think the same change
should also be done here.


> +/**
> + * Prototype for implementation specific data provisioning function.
> + *
> + * The function should provide the implementation specific memory
> + * for use by the other mempool ops functions in a given mempool ops struct.
> + * E.g. the default ops provides an instance of the rte_ring for this purpose.
> + * it will most likely point to a different type of data structure, and
> + * will be transparent to the application programmer.
> + */
> +typedef int (*rte_mempool_alloc_t)(struct rte_mempool *mp);

A comment saying that this function should set mp->pool_data
would be nice here, I think.


> +/* wrapper to allocate an external mempool's private (pool) data */
> +int
> +rte_mempool_ops_alloc(struct rte_mempool *mp)
> +{
> +	struct rte_mempool_ops *ops;
> +
> +	ops = rte_mempool_ops_get(mp->ops_index);
> +	if (ops->alloc == NULL)
> +		return -ENOMEM;
> +	return ops->alloc(mp);
> +}

Now that we check that ops->alloc != NULL in the register function,
I wonder if we should keep this test or not. Yes, it doesn't hurt,
but for consistency with the other functions (get/put/get_count),
we may remove it.

This would be a good thing because it would prevent any confusion
with rte_mempool_ops_get(), which returns a pointer to the ops struct
(and has nothing to do with ops->get()).



Regards,
Olivier

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v9 1/3] mempool: support external mempool operations
  2016-06-13 12:16                   ` Olivier Matz
@ 2016-06-13 13:46                     ` Hunt, David
  0 siblings, 0 replies; 238+ messages in thread
From: Hunt, David @ 2016-06-13 13:46 UTC (permalink / raw)
  To: Olivier Matz, dev; +Cc: viktorin, jerin.jacob, shreyansh.jain

Hi Olivier,

On 13/6/2016 1:16 PM, Olivier Matz wrote:
> Hi David,
>
> Some comments below.
>
> On 06/10/2016 05:16 PM, David Hunt wrote:
>> Until now, the objects stored in a mempool were internally stored in a
>> ring. This patch introduces the possibility to register external handlers
>> replacing the ring.
>>
>> The default behavior remains unchanged, but calling the new function
>> rte_mempool_set_handler() right after rte_mempool_create_empty() allows
>> the user to change the handler that will be used when populating
>> the mempool.
>>
>> This patch also adds a set of default ops (function callbacks) based
>> on rte_ring.
>>
>> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
>> Signed-off-by: David Hunt <david.hunt@intel.com>
>>
>> ...
>> @@ -386,10 +352,14 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
>>   	int ret;
>>   
>>   	/* create the internal ring if not already done */
>> -	if ((mp->flags & MEMPOOL_F_RING_CREATED) == 0) {
>> -		ret = rte_mempool_ring_create(mp);
>> -		if (ret < 0)
>> -			return ret;
>> +	if ((mp->flags & MEMPOOL_F_POOL_CREATED) == 0) {
>> +		rte_errno = 0;
>> +		ret = rte_mempool_ops_alloc(mp);
>> +		if (ret != 0) {
>> +			if (rte_errno == 0)
>> +				return -EINVAL;
>> +			return -rte_errno;
>> +		}
>>   	}
> The rte_errno should be removed. Just return the error code from
> rte_mempool_ops_alloc() on failure.

Done.

>> +/** Structure defining mempool operations structure */
>> +struct rte_mempool_ops {
>> +	char name[RTE_MEMPOOL_OPS_NAMESIZE]; /**< Name of mempool ops struct */
>> +	rte_mempool_alloc_t alloc;       /**< Allocate private data */
>> +	rte_mempool_free_t free;         /**< Free the external pool. */
>> +	rte_mempool_put_t put;           /**< Put an object. */
>> +	rte_mempool_get_t get;           /**< Get an object. */
>> +	rte_mempool_get_count get_count; /**< Get qty of available objs. */
>> +} __rte_cache_aligned;
>> +
> Sorry, I missed that in the previous reviews, but since the get/put
> functions have been renamed in dequeue/enqueue, I think the same change
> should also be done here.

Done. I also changed the common_ring_[sp|mp]_put and 
common_ring_[sc|mc]_get functions.
I didn't go as far as changing the rte_mempool_put or 
rte_mempool_put_bulk calls,
as they are used in several drivers and apps.

>
>> +/**
>> + * Prototype for implementation specific data provisioning function.
>> + *
>> + * The function should provide the implementation specific memory
>> + * for use by the other mempool ops functions in a given mempool ops struct.
>> + * E.g. the default ops provides an instance of the rte_ring for this purpose.
>> + * it will most likely point to a different type of data structure, and
>> + * will be transparent to the application programmer.
>> + */
>> +typedef int (*rte_mempool_alloc_t)(struct rte_mempool *mp);
> A comment saying that this function should set mp->pool_data
> would be nice here, I think.

Done

>> +/* wrapper to allocate an external mempool's private (pool) data */
>> +int
>> +rte_mempool_ops_alloc(struct rte_mempool *mp)
>> +{
>> +	struct rte_mempool_ops *ops;
>> +
>> +	ops = rte_mempool_ops_get(mp->ops_index);
>> +	if (ops->alloc == NULL)
>> +		return -ENOMEM;
>> +	return ops->alloc(mp);
>> +}
> Now that we check that ops->alloc != NULL in the register function,
> I wonder if we should keep this test or not. Yes, it doesn't hurt,
> but for consistency with the other functions (get/put/get_count),
> we may remove it.
>
> This would be a good thing because it would prevent any confusion
> with rte_mempool_ops_get(), which returns a pointer to the ops struct
> (and has nothing to do with ops->get()).

Done.

>
> Regards,
> Olivier

Thanks,
Dave.

^ permalink raw reply	[flat|nested] 238+ messages in thread

* [PATCH v10 0/3] mempool: add external mempool manager
  2016-06-10 15:16               ` [PATCH v9 0/3] mempool: add external mempool manager David Hunt
                                   ` (2 preceding siblings ...)
  2016-06-10 15:16                 ` [PATCH v9 3/3] mbuf: make default mempool ops configurable at build David Hunt
@ 2016-06-14  9:46                 ` David Hunt
  2016-06-14  9:46                   ` [PATCH v10 1/3] mempool: support external mempool operations David Hunt
                                     ` (4 more replies)
  3 siblings, 5 replies; 238+ messages in thread
From: David Hunt @ 2016-06-14  9:46 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, viktorin, jerin.jacob, shreyansh.jain

Here's the latest version of the External Mempool Manager patchset.
It's re-based on top of the latest head as of 14/6/2016, including
Olivier's 35-part patch series on mempool re-org [1]

[1] http://dpdk.org/ml/archives/dev/2016-May/039229.html

v10 changes:

 * changed the _put/_get op names to _enqueue/_dequeue to be consistent
   with the function names
 * some rte_errno cleanup
 * comment tweaks about when to set pool_data
 * removed an un-needed check for ops->alloc == NULL

v9 changes:

 * added a check for NULL alloc in rte_mempool_ops_register
 * rte_mempool_alloc_t now returns int instead of void*
 * fixed some comment typos
 * removed some unneeded typecasts
 * changed a return NULL to return -EEXIST in rte_mempool_ops_register
 * fixed rte_mempool_version.map file so builds ok as shared libs
 * moved flags check from rte_mempool_create_empty to rte_mempool_create

v8 changes:

 * merged first three patches in the series into one.
 * changed parameters of the ops callbacks to all be an rte_mempool pointer
   rather than a pointer to opaque data or a uint64.
 * comment fixes.
 * fixed parameter to _free function (was inconsistent).
 * changed MEMPOOL_F_RING_CREATED to MEMPOOL_F_POOL_CREATED

v7 changes:

 * Changed rte_mempool_handler_table to rte_mempool_ops_table
 * Changed hander_idx to ops_index in rte_mempool struct
 * Reworked comments in rte_mempool.h around ops functions
 * Changed rte_mempool_hander.c to rte_mempool_ops.c
 * Changed all functions containing _handler_ to _ops_
 * Now there is no mention of 'handler' left
 * Other small changes out of review of mailing list

v6 changes:

 * Moved the flags handling from rte_mempool_create_empty to
   rte_mempool_create, as it's only there for backward compatibility
 * Various comment additions and cleanup
 * Renamed rte_mempool_handler to rte_mempool_ops
 * Added a union for *pool and u64 pool_id in struct rte_mempool
 * split the original patch into a few parts for easier review.
 * rename functions with _ext_ to _ops_.
 * addressed review comments
 * renamed put and get functions to enqueue and dequeue
 * changed occurrences of rte_mempool_ops to const, as they
   contain function pointers (security)
 * split out the default external mempool handler into a separate
   patch for easier review

v5 changes:
 * rebasing, as it is dependent on another patch series [1]

v4 changes (Olivier Matz):
 * remove the rte_mempool_create_ext() function. To change the handler, the
   user has to do the following:
   - mp = rte_mempool_create_empty()
   - rte_mempool_set_handler(mp, "my_handler")
   - rte_mempool_populate_default(mp)
   This avoids adding another function with more than 10 arguments and
   duplicating the doxygen comments
 * change the api of rte_mempool_alloc_t: only the mempool pointer is required
   as all information is available in it
 * change the api of rte_mempool_free_t: remove return value
 * move inline wrapper functions from the .c to the .h (else they won't be
   inlined). This implies having a single header file (rte_mempool.h), or it
   would have generated cross-dependency issues.
 * remove now unused MEMPOOL_F_INT_HANDLER (note: it was misused anyway due
   to the use of && instead of &)
 * fix build in debug mode (__MEMPOOL_STAT_ADD(mp, put_pool, n) remaining)
 * fix build with shared libraries (global handler has to be declared in
   the .map file)
 * rationalize #include order
 * remove unused function rte_mempool_get_handler_name()
 * rename some structures, fields, functions
 * remove the static in front of rte_tailq_elem rte_mempool_tailq (comment
   from Yuanhan)
 * test the ext mempool handler in the same file as the standard mempool tests,
   to avoid duplicating the code
 * rework the custom handler in mempool_test
 * rework a bit the patch selecting default mbuf pool handler
 * fix some doxygen comments

v3 changes:
 * simplified the file layout, renamed to rte_mempool_handler.[hc]
 * moved the default handlers into rte_mempool_default.c
 * moved the example handler out into app/test/test_ext_mempool.c
 * removed is_mc/is_mp change, slight perf degradation on sp cached operation
 * removed stack handler, may re-introduce at a later date
 * Changes out of code reviews

v2 changes:
 * There was a lot of duplicate code between rte_mempool_xmem_create and
   rte_mempool_create_ext. This has now been refactored and is
   hopefully cleaner.
 * The RTE_NEXT_ABI define is now used to allow building of the library
   in a format that is compatible with binaries built against previous
   versions of DPDK.
 * Changes out of code reviews. Hopefully I've got most of them included.

The External Mempool Manager is an extension to the mempool API that allows
users to add and use an external mempool manager, which allows external memory
subsystems such as external hardware memory management systems and software
based memory allocators to be used with DPDK.

The existing API to the internal DPDK mempool manager will remain unchanged
and will be backward compatible. However, there will be an ABI breakage, as
the mempool struct is changing. These changes are all contained within
RTE_NEXT_ABI defs, and the current or next code can be selected with
the CONFIG_RTE_NEXT_ABI config setting

There are two aspects to external mempool manager.
  1. Adding the code for your new mempool operations (ops). This is
     achieved by adding a new mempool ops source file into the
     librte_mempool library, and using the REGISTER_MEMPOOL_HANDLER macro.
  2. Using the new API to call rte_mempool_create_empty and
     rte_mempool_set_ops to create a new mempool
     using the name parameter to identify which ops to use.

New API calls added
 1. A new rte_mempool_create_empty() function
 2. rte_mempool_set_ops_byname() which sets the mempool's ops (functions)
 3. rte_mempool_populate_default() and rte_mempool_populate_anon() functions,
    which populate the mempool using the relevant ops

Several external mempool managers may be used in the same application. A new
mempool can then be created with rte_mempool_create_empty(), providing the
mempool ops struct name to rte_mempool_set_ops_byname() to point the mempool
at the relevant mempool manager callback structure.

The old 'create' function can still be called by legacy programs, and will
internally work out the mempool ops to use based on the flags provided
(single producer, single consumer, etc.). By default, ops structs are
registered internally to implement the built-in DPDK mempool manager and
mempool types.
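The flag-based selection in the legacy path boils down to a small flag-to-name mapping. A standalone sketch of the intended mapping follows; the flag values and the helper name are illustrative, not the real DPDK symbols:

```c
#include <assert.h>
#include <string.h>

/* Illustrative flag values; the real MEMPOOL_F_* defines live in
 * rte_mempool.h. */
#define F_SP_PUT 0x0004  /* put is single-producer */
#define F_SC_GET 0x0008  /* get is single-consumer */

/* Map legacy create-flags to the name of a registered ops struct. */
static const char *
ops_name_from_flags(int flags)
{
	if ((flags & F_SP_PUT) && (flags & F_SC_GET))
		return "ring_sp_sc";
	else if (flags & F_SP_PUT)
		return "ring_sp_mc";
	else if (flags & F_SC_GET)
		return "ring_mp_sc";
	return "ring_mp_mc";  /* default: multi-producer, multi-consumer */
}
```

The four resulting names match the four ring-backed ops structs registered by the library's default handler file.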

The external mempool manager needs to provide the following functions:
 1. alloc     - allocates the pool's internal data structure (e.g. a ring)
                and stores it in mp->pool_data
 2. enqueue   - puts an object back into the mempool once an application has
                finished with it
 3. dequeue   - gets an object from the mempool for use by the application
 4. get_count - gets the number of available objects in the mempool
 5. free      - frees the mempool memory

Every time enqueue, dequeue or get_count is called from the application or a
PMD, the corresponding callback for that mempool is invoked. These functions
are in the fast path, and any unoptimised ops may limit performance.
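The fast-path dispatch can be sketched outside DPDK as a table of callbacks indexed by an integer stored in the pool. All names below are made up for the illustration; only the mechanism (index into a per-process ops table, then an indirect call) mirrors the patch:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

struct pool;  /* forward declaration for the callback signatures */

/* Per-process ops table: callbacks live here; pools store only an index. */
struct pool_ops {
	const char *name;
	int (*enqueue)(struct pool *p, void *obj);
	int (*dequeue)(struct pool *p, void **obj);
};

struct pool {
	int32_t ops_index;  /* meaningful in every process, unlike a pointer */
	void *slot;         /* trivial one-object store, for the sketch only */
};

static struct pool_ops ops_table[16];
static int32_t num_ops;

static int32_t register_pool_ops(struct pool_ops ops)
{
	ops_table[num_ops] = ops;
	return num_ops++;
}

/* Fast-path wrappers: one table lookup plus an indirect call. */
static int pool_enqueue(struct pool *p, void *obj)
{
	return ops_table[p->ops_index].enqueue(p, obj);
}

static int pool_dequeue(struct pool *p, void **obj)
{
	return ops_table[p->ops_index].dequeue(p, obj);
}

/* A minimal ops implementation backing the pool with a single slot. */
static int slot_enqueue(struct pool *p, void *obj)
{
	if (p->slot != NULL)
		return -1;
	p->slot = obj;
	return 0;
}

static int slot_dequeue(struct pool *p, void **obj)
{
	if (p->slot == NULL)
		return -1;
	*obj = p->slot;
	p->slot = NULL;
	return 0;
}
```

Because only the integer index is stored in the shared pool struct, a secondary process resolves the same index against its own copy of the table, which is the rationale for ops_index in the real rte_mempool struct.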

The new APIs are as follows:

1. rte_mempool_create_empty

struct rte_mempool *
rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
    unsigned cache_size, unsigned private_data_size,
    int socket_id, unsigned flags);

2. rte_mempool_set_ops_byname()

int
rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name);

3. rte_mempool_populate_default()

int rte_mempool_populate_default(struct rte_mempool *mp);

4. rte_mempool_populate_anon()

int rte_mempool_populate_anon(struct rte_mempool *mp);

Please see rte_mempool.h for further information on the parameters.


The important thing to note is that the mempool ops struct is passed by name
to rte_mempool_set_ops_byname, which looks through the ops struct array to
find the ops_index, which is then stored in the rte_mempool structure. This
allows multiple processes to use the same mempool, as the function pointers
are accessed via the ops index rather than stored directly.

The mempool ops structure contains the callbacks implementing the ops
functions, and is set up for registration as follows:

static const struct rte_mempool_ops ops_sp_mc = {
    .name = "ring_sp_mc",
    .alloc = common_ring_alloc,
    .free = common_ring_free,
    .enqueue = common_ring_sp_enqueue,
    .dequeue = common_ring_mc_dequeue,
    .get_count = common_ring_get_count,
};

The following macro will then register the ops struct in the array of ops
structures:

MEMPOOL_REGISTER_OPS(ops_sp_mc);
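The registration macro relies on a GCC/Clang constructor function that runs before main(), so merely linking the ops source file is enough to register it. A standalone sketch of the same trick, with all names invented for the illustration:

```c
#include <assert.h>
#include <string.h>

struct my_ops {
	const char *name;
};

/* Process-wide registry, filled in before main() runs. */
static const char *registered_names[16];
static int num_registered;

static int my_ops_register(const struct my_ops *ops)
{
	registered_names[num_registered] = ops->name;
	return num_registered++;
}

/* Same mechanism as MEMPOOL_REGISTER_OPS: generate one constructor
 * function per ops struct that records it in the table. */
#define MY_REGISTER_OPS(ops)					\
	static void __attribute__((constructor, used))		\
	my_init_##ops(void) { my_ops_register(&ops); }

static struct my_ops demo_ops = { .name = "demo" };
MY_REGISTER_OPS(demo_ops)
```

By the time main() starts, demo_ops is already present in the registry, which is exactly how the built-in ring-based ops structs become available without any explicit init call.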

For an example of API usage, please see app/test/test_mempool.c, which
implements a rudimentary "custom_handler" mempool manager using a simple
malloc for each mempool object. This file also contains the callbacks and
the self-registration for the new handler.
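In the spirit of that test's custom handler, the five required operations can be prototyped against a plain array-backed pool. This is a standalone sketch, not the DPDK test code itself (no locking, so single-threaded only; the names are illustrative):

```c
#include <assert.h>
#include <stdlib.h>

struct custom_pool {
	unsigned count;  /* objects currently held */
	unsigned size;   /* capacity */
	void **elts;     /* stack of object pointers */
};

/* "alloc": create the pool's private data (the real callback would
 * store the result in mp->pool_data). */
static struct custom_pool *custom_pool_create(unsigned size)
{
	struct custom_pool *cp = malloc(sizeof(*cp));
	if (cp == NULL)
		return NULL;
	cp->elts = calloc(size, sizeof(void *));
	if (cp->elts == NULL) {
		free(cp);
		return NULL;
	}
	cp->count = 0;
	cp->size = size;
	return cp;
}

/* "enqueue": return an object to the pool. */
static int custom_pool_enqueue(struct custom_pool *cp, void *obj)
{
	if (cp->count >= cp->size)
		return -1;  /* pool full */
	cp->elts[cp->count++] = obj;
	return 0;
}

/* "dequeue": take an object from the pool. */
static int custom_pool_dequeue(struct custom_pool *cp, void **obj)
{
	if (cp->count == 0)
		return -1;  /* pool empty */
	*obj = cp->elts[--cp->count];
	return 0;
}

/* "get_count": number of available objects. */
static unsigned custom_pool_get_count(const struct custom_pool *cp)
{
	return cp->count;
}

/* "free": release the pool's private data. */
static void custom_pool_free(struct custom_pool *cp)
{
	free(cp->elts);
	free(cp);
}
```

Wrapping these in an rte_mempool_ops struct and registering it with the macro would make the pool selectable by name, exactly as the ring-based defaults are.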

David Hunt (2):
  mempool: support external mempool operations
  mbuf: make default mempool ops configurable at build

Olivier Matz (1):
  app/test: test external mempool manager


* [PATCH v10 1/3] mempool: support external mempool operations
  2016-06-14  9:46                 ` [PATCH v10 0/3] mempool: add external mempool manager David Hunt
@ 2016-06-14  9:46                   ` David Hunt
  2016-06-14 11:38                     ` Shreyansh Jain
  2016-06-14 12:55                     ` Thomas Monjalon
  2016-06-14  9:46                   ` [PATCH v10 2/3] app/test: test external mempool manager David Hunt
                                     ` (3 subsequent siblings)
  4 siblings, 2 replies; 238+ messages in thread
From: David Hunt @ 2016-06-14  9:46 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, viktorin, jerin.jacob, shreyansh.jain, David Hunt

Until now, the objects stored in a mempool were managed internally using a
ring. This patch introduces the ability to register external handlers that
replace the ring.

The default behavior remains unchanged, but calling the new function
rte_mempool_set_ops_byname() right after rte_mempool_create_empty() allows
the user to change the ops that will be used when populating
the mempool.

This patch also adds a set of default ops (function callbacks) based
on rte_ring.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
---
 app/test/test_mempool_perf.c               |   1 -
 lib/librte_mempool/Makefile                |   2 +
 lib/librte_mempool/rte_mempool.c           |  66 +++-----
 lib/librte_mempool/rte_mempool.h           | 249 ++++++++++++++++++++++++++---
 lib/librte_mempool/rte_mempool_default.c   | 161 +++++++++++++++++++
 lib/librte_mempool/rte_mempool_ops.c       | 148 +++++++++++++++++
 lib/librte_mempool/rte_mempool_version.map |   4 +
 7 files changed, 566 insertions(+), 65 deletions(-)
 create mode 100644 lib/librte_mempool/rte_mempool_default.c
 create mode 100644 lib/librte_mempool/rte_mempool_ops.c

diff --git a/app/test/test_mempool_perf.c b/app/test/test_mempool_perf.c
index c5e3576..c5f8455 100644
--- a/app/test/test_mempool_perf.c
+++ b/app/test/test_mempool_perf.c
@@ -161,7 +161,6 @@ per_lcore_mempool_test(__attribute__((unused)) void *arg)
 							   n_get_bulk);
 				if (unlikely(ret < 0)) {
 					rte_mempool_dump(stdout, mp);
-					rte_ring_dump(stdout, mp->ring);
 					/* in this case, objects are lost... */
 					return -1;
 				}
diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
index 43423e0..8cac29b 100644
--- a/lib/librte_mempool/Makefile
+++ b/lib/librte_mempool/Makefile
@@ -42,6 +42,8 @@ LIBABIVER := 2
 
 # all source are stored in SRCS-y
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool.c
+SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_ops.c
+SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_default.c
 # install includes
 SYMLINK-$(CONFIG_RTE_LIBRTE_MEMPOOL)-include := rte_mempool.h
 
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 22a5645..ac40cb3 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -148,7 +148,7 @@ mempool_add_elem(struct rte_mempool *mp, void *obj, phys_addr_t physaddr)
 #endif
 
 	/* enqueue in ring */
-	rte_ring_sp_enqueue(mp->ring, obj);
+	rte_mempool_ops_enqueue_bulk(mp, &obj, 1);
 }
 
 /* call obj_cb() for each mempool element */
@@ -303,40 +303,6 @@ rte_mempool_xmem_usage(__rte_unused void *vaddr, uint32_t elt_num,
 	return (size_t)paddr_idx << pg_shift;
 }
 
-/* create the internal ring */
-static int
-rte_mempool_ring_create(struct rte_mempool *mp)
-{
-	int rg_flags = 0, ret;
-	char rg_name[RTE_RING_NAMESIZE];
-	struct rte_ring *r;
-
-	ret = snprintf(rg_name, sizeof(rg_name),
-		RTE_MEMPOOL_MZ_FORMAT, mp->name);
-	if (ret < 0 || ret >= (int)sizeof(rg_name))
-		return -ENAMETOOLONG;
-
-	/* ring flags */
-	if (mp->flags & MEMPOOL_F_SP_PUT)
-		rg_flags |= RING_F_SP_ENQ;
-	if (mp->flags & MEMPOOL_F_SC_GET)
-		rg_flags |= RING_F_SC_DEQ;
-
-	/* Allocate the ring that will be used to store objects.
-	 * Ring functions will return appropriate errors if we are
-	 * running as a secondary process etc., so no checks made
-	 * in this function for that condition.
-	 */
-	r = rte_ring_create(rg_name, rte_align32pow2(mp->size + 1),
-		mp->socket_id, rg_flags);
-	if (r == NULL)
-		return -rte_errno;
-
-	mp->ring = r;
-	mp->flags |= MEMPOOL_F_RING_CREATED;
-	return 0;
-}
-
 /* free a memchunk allocated with rte_memzone_reserve() */
 static void
 rte_mempool_memchunk_mz_free(__rte_unused struct rte_mempool_memhdr *memhdr,
@@ -354,7 +320,7 @@ rte_mempool_free_memchunks(struct rte_mempool *mp)
 	void *elt;
 
 	while (!STAILQ_EMPTY(&mp->elt_list)) {
-		rte_ring_sc_dequeue(mp->ring, &elt);
+		rte_mempool_ops_dequeue_bulk(mp, &elt, 1);
 		(void)elt;
 		STAILQ_REMOVE_HEAD(&mp->elt_list, next);
 		mp->populated_size--;
@@ -386,9 +352,9 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
 	int ret;
 
 	/* create the internal ring if not already done */
-	if ((mp->flags & MEMPOOL_F_RING_CREATED) == 0) {
-		ret = rte_mempool_ring_create(mp);
-		if (ret < 0)
+	if ((mp->flags & MEMPOOL_F_POOL_CREATED) == 0) {
+		ret = rte_mempool_ops_alloc(mp);
+		if (ret != 0)
 			return ret;
 	}
 
@@ -703,7 +669,7 @@ rte_mempool_free(struct rte_mempool *mp)
 	rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
 
 	rte_mempool_free_memchunks(mp);
-	rte_ring_free(mp->ring);
+	rte_mempool_ops_free(mp);
 	rte_memzone_free(mp->mz);
 }
 
@@ -815,6 +781,7 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
 		RTE_PTR_ADD(mp, MEMPOOL_HEADER_SIZE(mp, 0));
 
 	te->data = mp;
+
 	rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
 	TAILQ_INSERT_TAIL(mempool_list, te, next);
 	rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
@@ -844,6 +811,19 @@ rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
 	if (mp == NULL)
 		return NULL;
 
+	/*
+	 * Since we have 4 combinations of SP/SC/MP/MC, examine the flags to
+	 * set the correct index into the table of ops structs.
+	 */
+	if ((flags & MEMPOOL_F_SP_PUT) && (flags & MEMPOOL_F_SC_GET))
+		rte_mempool_set_ops_byname(mp, "ring_sp_sc");
+	else if (flags & MEMPOOL_F_SP_PUT)
+		rte_mempool_set_ops_byname(mp, "ring_sp_mc");
+	else if (flags & MEMPOOL_F_SC_GET)
+		rte_mempool_set_ops_byname(mp, "ring_mp_sc");
+	else
+		rte_mempool_set_ops_byname(mp, "ring_mp_mc");
+
 	/* call the mempool priv initializer */
 	if (mp_init)
 		mp_init(mp, mp_init_arg);
@@ -930,7 +910,7 @@ rte_mempool_count(const struct rte_mempool *mp)
 	unsigned count;
 	unsigned lcore_id;
 
-	count = rte_ring_count(mp->ring);
+	count = rte_mempool_ops_get_count(mp);
 
 	if (mp->cache_size == 0)
 		return count;
@@ -1119,7 +1099,7 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 
 	fprintf(f, "mempool <%s>@%p\n", mp->name, mp);
 	fprintf(f, "  flags=%x\n", mp->flags);
-	fprintf(f, "  ring=<%s>@%p\n", mp->ring->name, mp->ring);
+	fprintf(f, "  pool=%p\n", mp->pool_data);
 	fprintf(f, "  phys_addr=0x%" PRIx64 "\n", mp->mz->phys_addr);
 	fprintf(f, "  nb_mem_chunks=%u\n", mp->nb_mem_chunks);
 	fprintf(f, "  size=%"PRIu32"\n", mp->size);
@@ -1140,7 +1120,7 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 	}
 
 	cache_count = rte_mempool_dump_cache(f, mp);
-	common_count = rte_ring_count(mp->ring);
+	common_count = rte_mempool_ops_get_count(mp);
 	if ((cache_count + common_count) > mp->size)
 		common_count = mp->size - cache_count;
 	fprintf(f, "  common_pool_count=%u\n", common_count);
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 60339bd..20b8a68 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -67,6 +67,7 @@
 #include <inttypes.h>
 #include <sys/queue.h>
 
+#include <rte_spinlock.h>
 #include <rte_log.h>
 #include <rte_debug.h>
 #include <rte_lcore.h>
@@ -203,10 +204,13 @@ struct rte_mempool_memhdr {
  */
 struct rte_mempool {
 	char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool. */
-	struct rte_ring *ring;           /**< Ring to store objects. */
+	union {
+		void *pool_data;         /**< Ring or pool to store objects */
+		uint64_t pool_id;        /**< External mempool identifier */
+	};
 	const struct rte_memzone *mz;    /**< Memzone where pool is allocated */
 	int flags;                       /**< Flags of the mempool. */
-	int socket_id;                   /**< Socket id passed at mempool creation. */
+	int socket_id;                   /**< Socket id passed at create */
 	uint32_t size;                   /**< Max size of the mempool. */
 	uint32_t cache_size;             /**< Size of per-lcore local cache. */
 	uint32_t cache_flushthresh;
@@ -217,6 +221,14 @@ struct rte_mempool {
 	uint32_t trailer_size;           /**< Size of trailer (after elt). */
 
 	unsigned private_data_size;      /**< Size of private data. */
+	/**
+	 * Index into rte_mempool_ops_table array of mempool ops
+	 * structs, which contain callback function pointers.
+	 * We're using an index here rather than pointers to the callbacks
+	 * to facilitate any secondary processes that may want to use
+	 * this mempool.
+	 */
+	int32_t ops_index;
 
 	struct rte_mempool_cache *local_cache; /**< Per-lcore local cache */
 
@@ -235,7 +247,7 @@ struct rte_mempool {
 #define MEMPOOL_F_NO_CACHE_ALIGN 0x0002 /**< Do not align objs on cache lines.*/
 #define MEMPOOL_F_SP_PUT         0x0004 /**< Default put is "single-producer".*/
 #define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
-#define MEMPOOL_F_RING_CREATED   0x0010 /**< Internal: ring is created */
+#define MEMPOOL_F_POOL_CREATED   0x0010 /**< Internal: pool is created */
 #define MEMPOOL_F_NO_PHYS_CONTIG 0x0020 /**< Don't need physically contiguous objs. */
 
 /**
@@ -325,6 +337,212 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp,
 #define __mempool_check_cookies(mp, obj_table_const, n, free) do {} while(0)
 #endif /* RTE_LIBRTE_MEMPOOL_DEBUG */
 
+#define RTE_MEMPOOL_OPS_NAMESIZE 32 /**< Max length of ops struct name. */
+
+/**
+ * Prototype for implementation specific data provisioning function.
+ *
+ * The function should provide the implementation specific memory for
+ * use by the other mempool ops functions in a given mempool ops struct.
+ * E.g. the default ops provides an instance of rte_ring for this purpose.
+ * Other implementations will most likely use a different data structure,
+ * and this will be transparent to the application programmer.
+ * This function should set mp->pool_data.
+ */
+typedef int (*rte_mempool_alloc_t)(struct rte_mempool *mp);
+
+/**
+ * Free the opaque private data pointed to by mp->pool_data pointer.
+ */
+typedef void (*rte_mempool_free_t)(struct rte_mempool *mp);
+
+/**
+ * Enqueue an object into the external pool.
+ */
+typedef int (*rte_mempool_enqueue_t)(struct rte_mempool *mp,
+		void * const *obj_table, unsigned int n);
+
+/**
+ * Dequeue an object from the external pool.
+ */
+typedef int (*rte_mempool_dequeue_t)(struct rte_mempool *mp,
+		void **obj_table, unsigned int n);
+
+/**
+ * Return the number of available objects in the external pool.
+ */
+typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
+
+/** Structure defining mempool operations structure */
+struct rte_mempool_ops {
+	char name[RTE_MEMPOOL_OPS_NAMESIZE]; /**< Name of mempool ops struct */
+	rte_mempool_alloc_t alloc;       /**< Allocate private data */
+	rte_mempool_free_t free;         /**< Free the external pool. */
+	rte_mempool_enqueue_t enqueue;   /**< Enqueue an object. */
+	rte_mempool_dequeue_t dequeue;   /**< Dequeue an object. */
+	rte_mempool_get_count get_count; /**< Get qty of available objs. */
+} __rte_cache_aligned;
+
+#define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */
+
+/**
+ * Structure storing the table of registered ops structs, each of which contain
+ * the function pointers for the mempool ops functions.
+ * Each process has its own storage for this ops struct array so that
+ * the mempools can be shared across primary and secondary processes.
+ * The indices used to access the array are valid across processes, whereas
+ * any function pointers stored directly in the mempool struct would not be.
+ * This results in us simply having "ops_index" in the mempool struct.
+ */
+struct rte_mempool_ops_table {
+	rte_spinlock_t sl;     /**< Spinlock for add/delete. */
+	uint32_t num_ops;      /**< Number of used ops structs in the table. */
+	/**
+	 * Storage for all possible ops structs.
+	 */
+	struct rte_mempool_ops ops[RTE_MEMPOOL_MAX_OPS_IDX];
+} __rte_cache_aligned;
+
+/** Array of registered ops structs */
+extern struct rte_mempool_ops_table rte_mempool_ops_table;
+
+/**
+ * @internal Get the mempool ops struct from its index.
+ *
+ * @param ops_index
+ *   The index of the ops struct in the ops struct table. It must be a valid
+ *   index: (0 <= idx < num_ops).
+ * @return
+ *   The pointer to the ops struct in the table.
+ */
+static inline struct rte_mempool_ops *
+rte_mempool_ops_get(int ops_index)
+{
+	RTE_VERIFY(ops_index < RTE_MEMPOOL_MAX_OPS_IDX);
+
+	return &rte_mempool_ops_table.ops[ops_index];
+}
+
+/**
+ * @internal Wrapper for mempool_ops alloc callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @return
+ *   - 0: Success; successfully allocated mempool pool_data
+ *   - <0: Error; code of alloc function.
+ */
+int
+rte_mempool_ops_alloc(struct rte_mempool *mp);
+
+/**
+ * @internal Wrapper for mempool_ops get callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param obj_table
+ *   Pointer to a table of void * pointers (objects).
+ * @param n
+ *   Number of objects to get.
+ * @return
+ *   - 0: Success; got n objects.
+ *   - <0: Error; code of get function.
+ */
+static inline int
+rte_mempool_ops_dequeue_bulk(struct rte_mempool *mp,
+		void **obj_table, unsigned n)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_ops_get(mp->ops_index);
+	return ops->dequeue(mp, obj_table, n);
+}
+
+/**
+ * @internal wrapper for mempool_ops put callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param obj_table
+ *   Pointer to a table of void * pointers (objects).
+ * @param n
+ *   Number of objects to put.
+ * @return
+ *   - 0: Success; n objects supplied.
+ *   - <0: Error; code of put function.
+ */
+static inline int
+rte_mempool_ops_enqueue_bulk(struct rte_mempool *mp, void * const *obj_table,
+		unsigned n)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_ops_get(mp->ops_index);
+	return ops->enqueue(mp, obj_table, n);
+}
+
+/**
+ * @internal wrapper for mempool_ops get_count callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @return
+ *   The number of available objects in the external pool.
+ */
+unsigned
+rte_mempool_ops_get_count(const struct rte_mempool *mp);
+
+/**
+ * @internal wrapper for mempool_ops free callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ */
+void
+rte_mempool_ops_free(struct rte_mempool *mp);
+
+/**
+ * Set the ops of a mempool
+ *
+ * This can only be done on a mempool that is not populated, i.e. just after
+ * a call to rte_mempool_create_empty().
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param name
+ *   Name of the ops structure to use for this mempool.
+ * @return
+ *   - 0: Success; the mempool is now using the requested ops functions
+ *   - -EINVAL - Invalid ops struct name provided
+ *   - -EEXIST - mempool already has an ops struct assigned
+ */
+int
+rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name);
+
+/**
+ * Register mempool operations
+ *
+ * @param h
+ *   Pointer to an ops structure to register
+ * @return
+ *   - >=0: Success; return the index of the ops struct in the table.
+ *   - -EINVAL - some missing callbacks while registering ops struct
+ *   - -ENOSPC - the maximum number of ops structs has been reached
+ */
+int rte_mempool_ops_register(const struct rte_mempool_ops *ops);
+
+/**
+ * Macro to statically register the ops of an external mempool manager.
+ * Note that rte_mempool_ops_register fails silently here when more
+ * than RTE_MEMPOOL_MAX_OPS_IDX ops structs are registered.
+ */
+#define MEMPOOL_REGISTER_OPS(ops)					\
+	void mp_hdlr_init_##ops(void);					\
+	void __attribute__((constructor, used)) mp_hdlr_init_##ops(void)\
+	{								\
+		rte_mempool_ops_register(&ops);			\
+	}
+
 /**
  * An object callback function for mempool.
  *
@@ -774,7 +992,7 @@ __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
 	cache->len += n;
 
 	if (cache->len >= flushthresh) {
-		rte_ring_mp_enqueue_bulk(mp->ring, &cache->objs[cache_size],
+		rte_mempool_ops_enqueue_bulk(mp, &cache->objs[cache_size],
 				cache->len - cache_size);
 		cache->len = cache_size;
 	}
@@ -785,19 +1003,10 @@ ring_enqueue:
 
 	/* push remaining objects in ring */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-	if (is_mp) {
-		if (rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n) < 0)
-			rte_panic("cannot put objects in mempool\n");
-	}
-	else {
-		if (rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n) < 0)
-			rte_panic("cannot put objects in mempool\n");
-	}
+	if (rte_mempool_ops_enqueue_bulk(mp, obj_table, n) < 0)
+		rte_panic("cannot put objects in mempool\n");
 #else
-	if (is_mp)
-		rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n);
-	else
-		rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n);
+	rte_mempool_ops_enqueue_bulk(mp, obj_table, n);
 #endif
 }
 
@@ -945,7 +1154,8 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
 		uint32_t req = n + (cache_size - cache->len);
 
 		/* How many do we require i.e. number to fill the cache + the request */
-		ret = rte_ring_mc_dequeue_bulk(mp->ring, &cache->objs[cache->len], req);
+		ret = rte_mempool_ops_dequeue_bulk(mp,
+			&cache->objs[cache->len], req);
 		if (unlikely(ret < 0)) {
 			/*
 			 * In the offchance that we are buffer constrained,
@@ -972,10 +1182,7 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
 ring_dequeue:
 
 	/* get remaining objects from ring */
-	if (is_mc)
-		ret = rte_ring_mc_dequeue_bulk(mp->ring, obj_table, n);
-	else
-		ret = rte_ring_sc_dequeue_bulk(mp->ring, obj_table, n);
+	ret = rte_mempool_ops_dequeue_bulk(mp, obj_table, n);
 
 	if (ret < 0)
 		__MEMPOOL_STAT_ADD(mp, get_fail, n);
diff --git a/lib/librte_mempool/rte_mempool_default.c b/lib/librte_mempool/rte_mempool_default.c
new file mode 100644
index 0000000..626786e
--- /dev/null
+++ b/lib/librte_mempool/rte_mempool_default.c
@@ -0,0 +1,161 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <string.h>
+
+#include <rte_errno.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+
+static int
+common_ring_mp_enqueue(struct rte_mempool *mp, void * const *obj_table,
+		unsigned n)
+{
+	return rte_ring_mp_enqueue_bulk(mp->pool_data, obj_table, n);
+}
+
+static int
+common_ring_sp_enqueue(struct rte_mempool *mp, void * const *obj_table,
+		unsigned n)
+{
+	return rte_ring_sp_enqueue_bulk(mp->pool_data, obj_table, n);
+}
+
+static int
+common_ring_mc_dequeue(struct rte_mempool *mp, void **obj_table, unsigned n)
+{
+	return rte_ring_mc_dequeue_bulk(mp->pool_data, obj_table, n);
+}
+
+static int
+common_ring_sc_dequeue(struct rte_mempool *mp, void **obj_table, unsigned n)
+{
+	return rte_ring_sc_dequeue_bulk(mp->pool_data, obj_table, n);
+}
+
+static unsigned
+common_ring_get_count(const struct rte_mempool *mp)
+{
+	return rte_ring_count(mp->pool_data);
+}
+
+
+static int
+common_ring_alloc(struct rte_mempool *mp)
+{
+	int rg_flags = 0, ret;
+	char rg_name[RTE_RING_NAMESIZE];
+	struct rte_ring *r;
+
+	ret = snprintf(rg_name, sizeof(rg_name),
+		RTE_MEMPOOL_MZ_FORMAT, mp->name);
+	if (ret < 0 || ret >= (int)sizeof(rg_name)) {
+		rte_errno = ENAMETOOLONG;
+		return -rte_errno;
+	}
+
+	/* ring flags */
+	if (mp->flags & MEMPOOL_F_SP_PUT)
+		rg_flags |= RING_F_SP_ENQ;
+	if (mp->flags & MEMPOOL_F_SC_GET)
+		rg_flags |= RING_F_SC_DEQ;
+
+	/*
+	 * Allocate the ring that will be used to store objects.
+	 * Ring functions will return appropriate errors if we are
+	 * running as a secondary process etc., so no checks made
+	 * in this function for that condition.
+	 */
+	r = rte_ring_create(rg_name, rte_align32pow2(mp->size + 1),
+		mp->socket_id, rg_flags);
+	if (r == NULL)
+		return -rte_errno;
+
+	mp->pool_data = r;
+
+	return 0;
+}
+
+static void
+common_ring_free(struct rte_mempool *mp)
+{
+	rte_ring_free(mp->pool_data);
+}
+
+/*
+ * The following 4 declarations of mempool ops structs address
+ * the need for the backward compatible mempool managers for
+ * single/multi producers and single/multi consumers as dictated by the
+ * flags provided to the rte_mempool_create function
+ */
+static const struct rte_mempool_ops ops_mp_mc = {
+	.name = "ring_mp_mc",
+	.alloc = common_ring_alloc,
+	.free = common_ring_free,
+	.enqueue = common_ring_mp_enqueue,
+	.dequeue = common_ring_mc_dequeue,
+	.get_count = common_ring_get_count,
+};
+
+static const struct rte_mempool_ops ops_sp_sc = {
+	.name = "ring_sp_sc",
+	.alloc = common_ring_alloc,
+	.free = common_ring_free,
+	.enqueue = common_ring_sp_enqueue,
+	.dequeue = common_ring_sc_dequeue,
+	.get_count = common_ring_get_count,
+};
+
+static const struct rte_mempool_ops ops_mp_sc = {
+	.name = "ring_mp_sc",
+	.alloc = common_ring_alloc,
+	.free = common_ring_free,
+	.enqueue = common_ring_mp_enqueue,
+	.dequeue = common_ring_sc_dequeue,
+	.get_count = common_ring_get_count,
+};
+
+static const struct rte_mempool_ops ops_sp_mc = {
+	.name = "ring_sp_mc",
+	.alloc = common_ring_alloc,
+	.free = common_ring_free,
+	.enqueue = common_ring_sp_enqueue,
+	.dequeue = common_ring_mc_dequeue,
+	.get_count = common_ring_get_count,
+};
+
+MEMPOOL_REGISTER_OPS(ops_mp_mc);
+MEMPOOL_REGISTER_OPS(ops_sp_sc);
+MEMPOOL_REGISTER_OPS(ops_mp_sc);
+MEMPOOL_REGISTER_OPS(ops_sp_mc);
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
new file mode 100644
index 0000000..c1cd4e7
--- /dev/null
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -0,0 +1,148 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2016 6WIND S.A.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <string.h>
+
+#include <rte_mempool.h>
+#include <rte_errno.h>
+
+/* indirect jump table to support external memory pools */
+struct rte_mempool_ops_table rte_mempool_ops_table = {
+	.sl =  RTE_SPINLOCK_INITIALIZER,
+	.num_ops = 0
+};
+
+/* add a new ops struct in rte_mempool_ops_table, return its index */
+int
+rte_mempool_ops_register(const struct rte_mempool_ops *h)
+{
+	struct rte_mempool_ops *ops;
+	int16_t ops_index;
+
+	rte_spinlock_lock(&rte_mempool_ops_table.sl);
+
+	if (rte_mempool_ops_table.num_ops >=
+			RTE_MEMPOOL_MAX_OPS_IDX) {
+		rte_spinlock_unlock(&rte_mempool_ops_table.sl);
+		RTE_LOG(ERR, MEMPOOL,
+			"Maximum number of mempool ops structs exceeded\n");
+		return -ENOSPC;
+	}
+
+	if (h->alloc == NULL || h->enqueue == NULL ||
+			h->dequeue == NULL || h->get_count == NULL) {
+		rte_spinlock_unlock(&rte_mempool_ops_table.sl);
+		RTE_LOG(ERR, MEMPOOL,
+			"Missing callback while registering mempool ops\n");
+		return -EINVAL;
+	}
+
+	if (strlen(h->name) >= sizeof(ops->name) - 1) {
+		RTE_LOG(DEBUG, EAL, "%s(): mempool_ops <%s>: name too long\n",
+				__func__, h->name);
+		rte_errno = EEXIST;
+		return -EEXIST;
+	}
+
+	ops_index = rte_mempool_ops_table.num_ops++;
+	ops = &rte_mempool_ops_table.ops[ops_index];
+	snprintf(ops->name, sizeof(ops->name), "%s", h->name);
+	ops->alloc = h->alloc;
+	ops->enqueue = h->enqueue;
+	ops->dequeue = h->dequeue;
+	ops->get_count = h->get_count;
+
+	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
+
+	return ops_index;
+}
+
+/* wrapper to allocate an external mempool's private (pool) data */
+int
+rte_mempool_ops_alloc(struct rte_mempool *mp)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_ops_get(mp->ops_index);
+	return ops->alloc(mp);
+}
+
+/* wrapper to free an external pool ops */
+void
+rte_mempool_ops_free(struct rte_mempool *mp)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_ops_get(mp->ops_index);
+	if (ops->free == NULL)
+		return;
+	return ops->free(mp);
+}
+
+/* wrapper to get available objects in an external mempool */
+unsigned int
+rte_mempool_ops_get_count(const struct rte_mempool *mp)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_ops_get(mp->ops_index);
+	return ops->get_count(mp);
+}
+
+/* sets mempool ops previously registered by rte_mempool_ops_register */
+int
+rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name)
+{
+	struct rte_mempool_ops *ops = NULL;
+	unsigned i;
+
+	/* too late, the mempool is already populated */
+	if (mp->flags & MEMPOOL_F_POOL_CREATED)
+		return -EEXIST;
+
+	for (i = 0; i < rte_mempool_ops_table.num_ops; i++) {
+		if (!strcmp(name,
+				rte_mempool_ops_table.ops[i].name)) {
+			ops = &rte_mempool_ops_table.ops[i];
+			break;
+		}
+	}
+
+	if (ops == NULL)
+		return -EINVAL;
+
+	mp->ops_index = i;
+	return 0;
+}
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index f63461b..f49c205 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -19,6 +19,8 @@ DPDK_2.0 {
 DPDK_16.7 {
 	global:
 
+	rte_mempool_ops_table;
+
 	rte_mempool_check_cookies;
 	rte_mempool_obj_iter;
 	rte_mempool_mem_iter;
@@ -29,6 +31,8 @@ DPDK_16.7 {
 	rte_mempool_populate_default;
 	rte_mempool_populate_anon;
 	rte_mempool_free;
+	rte_mempool_set_ops_byname;
+	rte_mempool_ops_register;
 
 	local: *;
 } DPDK_2.0;
-- 
2.5.5


* [PATCH v10 2/3] app/test: test external mempool manager
  2016-06-14  9:46                 ` [PATCH v10 0/3] mempool: add external mempool manager David Hunt
  2016-06-14  9:46                   ` [PATCH v10 1/3] mempool: support external mempool operations David Hunt
@ 2016-06-14  9:46                   ` David Hunt
  2016-06-14 11:39                     ` Shreyansh Jain
  2016-06-14  9:46                   ` [PATCH v10 3/3] mbuf: make default mempool ops configurable at build David Hunt
                                     ` (2 subsequent siblings)
  4 siblings, 1 reply; 238+ messages in thread
From: David Hunt @ 2016-06-14  9:46 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, viktorin, jerin.jacob, shreyansh.jain, David Hunt

Use a minimal custom mempool external ops and check that it also
passes basic mempool autotests.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
---
 app/test/test_mempool.c | 122 +++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 120 insertions(+), 2 deletions(-)

diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c
index b586249..bcf379b 100644
--- a/app/test/test_mempool.c
+++ b/app/test/test_mempool.c
@@ -83,6 +83,99 @@
 static rte_atomic32_t synchro;
 
 /*
+ * Simple example of custom mempool structure. Holds pointers to all the
+ * elements which are simply malloc'd in this example.
+ */
+struct custom_mempool {
+	rte_spinlock_t lock;
+	unsigned count;
+	unsigned size;
+	void *elts[];
+};
+
+/*
+ * Loop through all the element pointers and allocate a chunk of memory, then
+ * insert that memory into the ring.
+ */
+static int
+custom_mempool_alloc(struct rte_mempool *mp)
+{
+	struct custom_mempool *cm;
+
+	cm = rte_zmalloc("custom_mempool",
+		sizeof(struct custom_mempool) + mp->size * sizeof(void *), 0);
+	if (cm == NULL)
+		return -ENOMEM;
+
+	rte_spinlock_init(&cm->lock);
+	cm->count = 0;
+	cm->size = mp->size;
+	mp->pool_data = cm;
+	return 0;
+}
+
+static void
+custom_mempool_free(struct rte_mempool *mp)
+{
+	rte_free((void *)(mp->pool_data));
+}
+
+static int
+custom_mempool_enqueue(struct rte_mempool *mp, void * const *obj_table,
+		unsigned n)
+{
+	struct custom_mempool *cm = (struct custom_mempool *)(mp->pool_data);
+	int ret = 0;
+
+	rte_spinlock_lock(&cm->lock);
+	if (cm->count + n > cm->size) {
+		ret = -ENOBUFS;
+	} else {
+		memcpy(&cm->elts[cm->count], obj_table, sizeof(void *) * n);
+		cm->count += n;
+	}
+	rte_spinlock_unlock(&cm->lock);
+	return ret;
+}
+
+
+static int
+custom_mempool_dequeue(struct rte_mempool *mp, void **obj_table, unsigned n)
+{
+	struct custom_mempool *cm = (struct custom_mempool *)(mp->pool_data);
+	int ret = 0;
+
+	rte_spinlock_lock(&cm->lock);
+	if (n > cm->count) {
+		ret = -ENOENT;
+	} else {
+		cm->count -= n;
+		memcpy(obj_table, &cm->elts[cm->count], sizeof(void *) * n);
+	}
+	rte_spinlock_unlock(&cm->lock);
+	return ret;
+}
+
+static unsigned
+custom_mempool_get_count(const struct rte_mempool *mp)
+{
+	struct custom_mempool *cm = (struct custom_mempool *)(mp->pool_data);
+
+	return cm->count;
+}
+
+static struct rte_mempool_ops mempool_ops_custom = {
+	.name = "custom_handler",
+	.alloc = custom_mempool_alloc,
+	.free = custom_mempool_free,
+	.enqueue = custom_mempool_enqueue,
+	.dequeue = custom_mempool_dequeue,
+	.get_count = custom_mempool_get_count,
+};
+
+MEMPOOL_REGISTER_OPS(mempool_ops_custom);
+
+/*
  * save the object number in the first 4 bytes of object data. All
  * other bytes are set to 0.
  */
@@ -292,12 +385,14 @@ static int test_mempool_single_consumer(void)
  * test function for mempool test based on singple consumer and single producer,
  * can run on one lcore only
  */
-static int test_mempool_launch_single_consumer(__attribute__((unused)) void *arg)
+static int
+test_mempool_launch_single_consumer(__attribute__((unused)) void *arg)
 {
 	return test_mempool_single_consumer();
 }
 
-static void my_mp_init(struct rte_mempool * mp, __attribute__((unused)) void * arg)
+static void
+my_mp_init(struct rte_mempool *mp, __attribute__((unused)) void *arg)
 {
 	printf("mempool name is %s\n", mp->name);
 	/* nothing to be implemented here*/
@@ -477,6 +572,7 @@ test_mempool(void)
 {
 	struct rte_mempool *mp_cache = NULL;
 	struct rte_mempool *mp_nocache = NULL;
+	struct rte_mempool *mp_ext = NULL;
 
 	rte_atomic32_init(&synchro);
 
@@ -505,6 +601,27 @@ test_mempool(void)
 		goto err;
 	}
 
+	/* create a mempool with an external handler */
+	mp_ext = rte_mempool_create_empty("test_ext",
+		MEMPOOL_SIZE,
+		MEMPOOL_ELT_SIZE,
+		RTE_MEMPOOL_CACHE_MAX_SIZE, 0,
+		SOCKET_ID_ANY, 0);
+
+	if (mp_ext == NULL) {
+		printf("cannot allocate mp_ext mempool\n");
+		goto err;
+	}
+	if (rte_mempool_set_ops_byname(mp_ext, "custom_handler") < 0) {
+		printf("cannot set custom handler\n");
+		goto err;
+	}
+	if (rte_mempool_populate_default(mp_ext) < 0) {
+		printf("cannot populate mp_ext mempool\n");
+		goto err;
+	}
+	rte_mempool_obj_iter(mp_ext, my_obj_init, NULL);
+
 	/* retrieve the mempool from its name */
 	if (rte_mempool_lookup("test_nocache") != mp_nocache) {
 		printf("Cannot lookup mempool from its name\n");
@@ -545,6 +662,7 @@ test_mempool(void)
 err:
 	rte_mempool_free(mp_nocache);
 	rte_mempool_free(mp_cache);
+	rte_mempool_free(mp_ext);
 	return -1;
 }
 
-- 
2.5.5

* [PATCH v10 3/3] mbuf: make default mempool ops configurable at build
  2016-06-14  9:46                 ` [PATCH v10 0/3] mempool: add external mempool manager David Hunt
  2016-06-14  9:46                   ` [PATCH v10 1/3] mempool: support external mempool operations David Hunt
  2016-06-14  9:46                   ` [PATCH v10 2/3] app/test: test external mempool manager David Hunt
@ 2016-06-14  9:46                   ` David Hunt
  2016-06-14 11:45                     ` Shreyansh Jain
  2016-06-14 12:32                   ` [PATCH v10 0/3] mempool: add external mempool manager Olivier MATZ
  2016-06-14 15:48                   ` [PATCH v11 " David Hunt
  4 siblings, 1 reply; 238+ messages in thread
From: David Hunt @ 2016-06-14  9:46 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, viktorin, jerin.jacob, shreyansh.jain, David Hunt

By default, the mempool ops used for mbuf allocations is a multi
producer and multi consumer ring. We could imagine a target (maybe some
network processors?) that provides an hardware-assisted pool
mechanism. In this case, the default configuration for this architecture
would contain a different value for RTE_MBUF_DEFAULT_MEMPOOL_OPS.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
---
 config/common_base         |  1 +
 lib/librte_mbuf/rte_mbuf.c | 26 ++++++++++++++++++++++----
 2 files changed, 23 insertions(+), 4 deletions(-)

diff --git a/config/common_base b/config/common_base
index 47c26f6..899c038 100644
--- a/config/common_base
+++ b/config/common_base
@@ -394,6 +394,7 @@ CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG=n
 #
 CONFIG_RTE_LIBRTE_MBUF=y
 CONFIG_RTE_LIBRTE_MBUF_DEBUG=n
+CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS="ring_mp_mc"
 CONFIG_RTE_MBUF_REFCNT_ATOMIC=y
 CONFIG_RTE_PKTMBUF_HEADROOM=128
 
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index eec1456..491230c 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -153,6 +153,7 @@ rte_pktmbuf_pool_create(const char *name, unsigned n,
 	unsigned cache_size, uint16_t priv_size, uint16_t data_room_size,
 	int socket_id)
 {
+	struct rte_mempool *mp;
 	struct rte_pktmbuf_pool_private mbp_priv;
 	unsigned elt_size;
 
@@ -167,10 +168,27 @@ rte_pktmbuf_pool_create(const char *name, unsigned n,
 	mbp_priv.mbuf_data_room_size = data_room_size;
 	mbp_priv.mbuf_priv_size = priv_size;
 
-	return rte_mempool_create(name, n, elt_size,
-		cache_size, sizeof(struct rte_pktmbuf_pool_private),
-		rte_pktmbuf_pool_init, &mbp_priv, rte_pktmbuf_init, NULL,
-		socket_id, 0);
+	mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
+		 sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
+	if (mp == NULL)
+		return NULL;
+
+	rte_errno = rte_mempool_set_ops_byname(mp,
+			RTE_MBUF_DEFAULT_MEMPOOL_OPS);
+	if (rte_errno != 0) {
+		RTE_LOG(ERR, MBUF, "error setting mempool handler\n");
+		return NULL;
+	}
+	rte_pktmbuf_pool_init(mp, &mbp_priv);
+
+	if (rte_mempool_populate_default(mp) < 0) {
+		rte_mempool_free(mp);
+		return NULL;
+	}
+
+	rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);
+
+	return mp;
 }
 
 /* do some sanity checks on a mbuf: panic if it fails */
-- 
2.5.5

* Re: [PATCH v10 1/3] mempool: support external mempool operations
  2016-06-14  9:46                   ` [PATCH v10 1/3] mempool: support external mempool operations David Hunt
@ 2016-06-14 11:38                     ` Shreyansh Jain
  2016-06-14 12:55                     ` Thomas Monjalon
  1 sibling, 0 replies; 238+ messages in thread
From: Shreyansh Jain @ 2016-06-14 11:38 UTC (permalink / raw)
  To: David Hunt, dev; +Cc: olivier.matz, viktorin, jerin.jacob

Hi,

> -----Original Message-----
> From: David Hunt [mailto:david.hunt@intel.com]
> Sent: Tuesday, June 14, 2016 3:16 PM
> To: dev@dpdk.org
> Cc: olivier.matz@6wind.com; viktorin@rehivetech.com;
> jerin.jacob@caviumnetworks.com; Shreyansh Jain <shreyansh.jain@nxp.com>;
> David Hunt <david.hunt@intel.com>
> Subject: [PATCH v10 1/3] mempool: support external mempool operations
> 
> Until now, the objects stored in a mempool were internally stored in a
> ring. This patch introduces the possibility to register external handlers
> replacing the ring.
> 
> The default behavior remains unchanged, but calling the new function
> rte_mempool_set_handler() right after rte_mempool_create_empty() allows
> the user to change the handler that will be used when populating
> the mempool.
> 
> This patch also adds a set of default ops (function callbacks) based
> on rte_ring.
> 
> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> Signed-off-by: David Hunt <david.hunt@intel.com>

Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>

* Re: [PATCH v10 2/3] app/test: test external mempool manager
  2016-06-14  9:46                   ` [PATCH v10 2/3] app/test: test external mempool manager David Hunt
@ 2016-06-14 11:39                     ` Shreyansh Jain
  0 siblings, 0 replies; 238+ messages in thread
From: Shreyansh Jain @ 2016-06-14 11:39 UTC (permalink / raw)
  To: David Hunt, dev; +Cc: olivier.matz, viktorin, jerin.jacob

> -----Original Message-----
> From: David Hunt [mailto:david.hunt@intel.com]
> Sent: Tuesday, June 14, 2016 3:16 PM
> To: dev@dpdk.org
> Cc: olivier.matz@6wind.com; viktorin@rehivetech.com;
> jerin.jacob@caviumnetworks.com; Shreyansh Jain <shreyansh.jain@nxp.com>;
> David Hunt <david.hunt@intel.com>
> Subject: [PATCH v10 2/3] app/test: test external mempool manager
> 
> Use a minimal custom mempool external ops and check that it also
> passes basic mempool autotests.
> 
> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> Signed-off-by: David Hunt <david.hunt@intel.com>

Acked-by: Shreyansh Jain <Shreyansh.jain@nxp.com>

* Re: [PATCH v10 3/3] mbuf: make default mempool ops configurable at build
  2016-06-14  9:46                   ` [PATCH v10 3/3] mbuf: make default mempool ops configurable at build David Hunt
@ 2016-06-14 11:45                     ` Shreyansh Jain
  0 siblings, 0 replies; 238+ messages in thread
From: Shreyansh Jain @ 2016-06-14 11:45 UTC (permalink / raw)
  To: David Hunt, dev; +Cc: olivier.matz, viktorin, jerin.jacob

> -----Original Message-----
> From: David Hunt [mailto:david.hunt@intel.com]
> Sent: Tuesday, June 14, 2016 3:16 PM
> To: dev@dpdk.org
> Cc: olivier.matz@6wind.com; viktorin@rehivetech.com;
> jerin.jacob@caviumnetworks.com; Shreyansh Jain <shreyansh.jain@nxp.com>;
> David Hunt <david.hunt@intel.com>
> Subject: [PATCH v10 3/3] mbuf: make default mempool ops configurable at build
> 
> By default, the mempool ops used for mbuf allocations is a multi
> producer and multi consumer ring. We could imagine a target (maybe some
> network processors?) that provides an hardware-assisted pool
> mechanism. In this case, the default configuration for this architecture
> would contain a different value for RTE_MBUF_DEFAULT_MEMPOOL_OPS.
> 
> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> Signed-off-by: David Hunt <david.hunt@intel.com>

Acked-by: Shreyansh Jain <Shreyansh.jain@nxp.com>

* Re: [PATCH v10 0/3] mempool: add external mempool manager
  2016-06-14  9:46                 ` [PATCH v10 0/3] mempool: add external mempool manager David Hunt
                                     ` (2 preceding siblings ...)
  2016-06-14  9:46                   ` [PATCH v10 3/3] mbuf: make default mempool ops configurable at build David Hunt
@ 2016-06-14 12:32                   ` Olivier MATZ
  2016-06-14 15:48                   ` [PATCH v11 " David Hunt
  4 siblings, 0 replies; 238+ messages in thread
From: Olivier MATZ @ 2016-06-14 12:32 UTC (permalink / raw)
  To: David Hunt, dev; +Cc: viktorin, jerin.jacob, shreyansh.jain



On 06/14/2016 11:46 AM, David Hunt wrote:
> Here's the latest version of the External Mempool Manager patchset.
> It's re-based on top of the latest head as of 14/6/2016, including
> Olivier's 35-part patch series on mempool re-org [1]
>
> [1] http://dpdk.org/ml/archives/dev/2016-May/039229.html

Thanks David for working on this!

Series
Acked-by: Olivier Matz <olivier.matz@6wind.com>

* Re: [PATCH v10 1/3] mempool: support external mempool operations
  2016-06-14  9:46                   ` [PATCH v10 1/3] mempool: support external mempool operations David Hunt
  2016-06-14 11:38                     ` Shreyansh Jain
@ 2016-06-14 12:55                     ` Thomas Monjalon
  2016-06-14 13:20                       ` Hunt, David
  1 sibling, 1 reply; 238+ messages in thread
From: Thomas Monjalon @ 2016-06-14 12:55 UTC (permalink / raw)
  To: David Hunt; +Cc: dev, olivier.matz, viktorin, jerin.jacob, shreyansh.jain

Hi David,

2016-06-14 10:46, David Hunt:
> Until now, the objects stored in a mempool were internally stored in a
> ring. This patch introduces the possibility to register external handlers
> replacing the ring.
> 
> The default behavior remains unchanged, but calling the new function
> rte_mempool_set_handler() right after rte_mempool_create_empty() allows
> the user to change the handler that will be used when populating
> the mempool.
> 
> This patch also adds a set of default ops (function callbacks) based
> on rte_ring.
> 
> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> Signed-off-by: David Hunt <david.hunt@intel.com>

Glad to see we are close to have this feature integrated.

I've just looked into few details before pushing.
One of them are the comments. In mempool they were all ended by a dot.
Please check the new comments.

The doc/guides/rel_notes/deprecation.rst must be updated to remove
the deprecation notice in this patch.

Isn't there some explanations to add in
doc/guides/prog_guide/mempool_lib.rst?

Isn't there a better name than "default" for the default implementation?
I don't think the filename rte_mempool_default.c is meaningful.

> +/**
> + * Register mempool operations
> + *
> + * @param h
> + *   Pointer to and ops structure to register

The parameter name and its description are not correct.

> + * @return
> + *   - >=0: Success; return the index of the ops struct in the table.
> + *   - -EINVAL - some missing callbacks while registering ops struct
> + *   - -ENOSPC - the maximum number of ops structs has been reached
> + */
> +int rte_mempool_ops_register(const struct rte_mempool_ops *ops);

You can check the doc with doxygen:
	make doc-api-html

> --- a/lib/librte_mempool/rte_mempool_version.map
> +++ b/lib/librte_mempool/rte_mempool_version.map
> @@ -19,6 +19,8 @@ DPDK_2.0 {
>  DPDK_16.7 {
>  	global:
>  
> +	rte_mempool_ops_table;
> +

Why this empty line?

>  	rte_mempool_check_cookies;
>  	rte_mempool_obj_iter;
>  	rte_mempool_mem_iter;
> @@ -29,6 +31,8 @@ DPDK_16.7 {
>  	rte_mempool_populate_default;
>  	rte_mempool_populate_anon;
>  	rte_mempool_free;
> +	rte_mempool_set_ops_byname;
> +	rte_mempool_ops_register;

Please keep it in alphabetical order.
It seems the order was not respected before in mempool.

* Re: [PATCH v10 1/3] mempool: support external mempool operations
  2016-06-14 12:55                     ` Thomas Monjalon
@ 2016-06-14 13:20                       ` Hunt, David
  2016-06-14 13:29                         ` Thomas Monjalon
  0 siblings, 1 reply; 238+ messages in thread
From: Hunt, David @ 2016-06-14 13:20 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev, olivier.matz, viktorin, jerin.jacob, shreyansh.jain


Hi Thomas,

On 14/6/2016 1:55 PM, Thomas Monjalon wrote:
> Hi David,
>
> 2016-06-14 10:46, David Hunt:
>> Until now, the objects stored in a mempool were internally stored in a
>> ring. This patch introduces the possibility to register external handlers
>> replacing the ring.
>>
>> The default behavior remains unchanged, but calling the new function
>> rte_mempool_set_handler() right after rte_mempool_create_empty() allows
>> the user to change the handler that will be used when populating
>> the mempool.
>>
>> This patch also adds a set of default ops (function callbacks) based
>> on rte_ring.
>>
>> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
>> Signed-off-by: David Hunt <david.hunt@intel.com>
> Glad to see we are close to have this feature integrated.
>
> I've just looked into few details before pushing.
> One of them are the comments. In mempool they were all ended by a dot.
> Please check the new comments.

Do you mean the rte_mempool struct definition, or all comments? Shall I 
leave the
old comments the way they were before the change, or will I clean up?
If I clean up, I'd suggest I add a separate patch for that.

> The doc/guides/rel_notes/deprecation.rst must be updated to remove
> the deprecation notice in this patch.

Will do. As a separate patch in the set?

> Isn't there some explanations to add in
> doc/guides/prog_guide/mempool_lib.rst?

Yes, I'll adapt some of the cover letter, and add as a separate patch.

> Isn't there a better name than "default" for the default implementation?
> I don't think the filename rte_mempool_default.c is meaningful.

I could call it rte_mempool_ring.c? Since the default handler is ring based?

>> +/**
>> + * Register mempool operations
>> + *
>> + * @param h
>> + *   Pointer to and ops structure to register
> The parameter name and its description are not correct.

Will fix.

>> + * @return
>> + *   - >=0: Success; return the index of the ops struct in the table.
>> + *   - -EINVAL - some missing callbacks while registering ops struct
>> + *   - -ENOSPC - the maximum number of ops structs has been reached
>> + */
>> +int rte_mempool_ops_register(const struct rte_mempool_ops *ops);
> You can check the doc with doxygen:
> 	make doc-api-html

Will do.


>> --- a/lib/librte_mempool/rte_mempool_version.map
>> +++ b/lib/librte_mempool/rte_mempool_version.map
>> @@ -19,6 +19,8 @@ DPDK_2.0 {
>>   DPDK_16.7 {
>>   	global:
>>   
>> +	rte_mempool_ops_table;
>> +
> Why this empty line?

No particular reason. I will remove.

>>   	rte_mempool_check_cookies;
>>   	rte_mempool_obj_iter;
>>   	rte_mempool_mem_iter;
>> @@ -29,6 +31,8 @@ DPDK_16.7 {
>>   	rte_mempool_populate_default;
>>   	rte_mempool_populate_anon;
>>   	rte_mempool_free;
>> +	rte_mempool_set_ops_byname;
>> +	rte_mempool_ops_register;
> Please keep it in alphabetical order.
> It seems the order was not respected before in mempool.

I will fix this also.

Regards,
Dave.

* Re: [PATCH v10 1/3] mempool: support external mempool operations
  2016-06-14 13:20                       ` Hunt, David
@ 2016-06-14 13:29                         ` Thomas Monjalon
  0 siblings, 0 replies; 238+ messages in thread
From: Thomas Monjalon @ 2016-06-14 13:29 UTC (permalink / raw)
  To: Hunt, David
  Cc: dev, olivier.matz, viktorin, jerin.jacob, shreyansh.jain, Mcnamara, John

2016-06-14 14:20, Hunt, David:
> 
> Hi Thomas,
> 
> On 14/6/2016 1:55 PM, Thomas Monjalon wrote:
> > Hi David,
> >
> > 2016-06-14 10:46, David Hunt:
> >> Until now, the objects stored in a mempool were internally stored in a
> >> ring. This patch introduces the possibility to register external handlers
> >> replacing the ring.
> >>
> >> The default behavior remains unchanged, but calling the new function
> >> rte_mempool_set_handler() right after rte_mempool_create_empty() allows
> >> the user to change the handler that will be used when populating
> >> the mempool.
> >>
> >> This patch also adds a set of default ops (function callbacks) based
> >> on rte_ring.
> >>
> >> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> >> Signed-off-by: David Hunt <david.hunt@intel.com>
> > Glad to see we are close to have this feature integrated.
> >
> > I've just looked into few details before pushing.
> > One of them are the comments. In mempool they were all ended by a dot.
> > Please check the new comments.
> 
> Do you mean the rte_mempool struct definition, or all comments? Shall I 
> leave the
> old comments the way they were before the change, or will I clean up?
> If I clean up, I'd suggest I add a separate patch for that.

Just check and clean the comments added in this patch.

> > The doc/guides/rel_notes/deprecation.rst must be updated to remove
> > the deprecation notice in this patch.
> 
> Will do. As a separate patch in the set?

In this patch.

> > Isn't there some explanations to add in
> > doc/guides/prog_guide/mempool_lib.rst?
> 
> Yes, I'll adapt some of the cover letter, and add as a separate patch.

It is OK (and better) to add it in this patch.
Maybe you can request John's help for doc review.

> > Isn't there a better name than "default" for the default implementation?
> > I don't think the filename rte_mempool_default.c is meaningful.
> 
> I could call it rte_mempool_ring.c? Since the default handler is ring based?

It is an idea.

Thanks

* [PATCH v11 0/3] mempool: add external mempool manager
  2016-06-14  9:46                 ` [PATCH v10 0/3] mempool: add external mempool manager David Hunt
                                     ` (3 preceding siblings ...)
  2016-06-14 12:32                   ` [PATCH v10 0/3] mempool: add external mempool manager Olivier MATZ
@ 2016-06-14 15:48                   ` David Hunt
  2016-06-14 15:48                     ` [PATCH v11 1/3] mempool: support external mempool operations David Hunt
                                       ` (3 more replies)
  4 siblings, 4 replies; 238+ messages in thread
From: David Hunt @ 2016-06-14 15:48 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, viktorin, jerin.jacob, shreyansh.jain

Here's the latest version of the External Mempool Manager patchset.
It's re-based on top of the latest head as of 14/6/2016, including
Olivier's 35-part patch series on mempool re-org [1]

[1] http://dpdk.org/ml/archives/dev/2016-May/039229.html

v11 changes:

 * Fixed comments (added '.' where needed for consistency)
 * removed ABI breakage notice for mempool manager in deprecation.rst
 * Added description of the external mempool manager functionality to
   doc/guides/prog_guide/mempool_lib.rst (John Mc reviewed)
 * renamed rte_mempool_default.c to rte_mempool_ring.c
 * Kept the v10 ACK from Shreyansh and Olivier for v11

v10 changes:

 * changed the _put/_get op names to _enqueue/_dequeue to be consistent
   with the function names
 * some rte_errno cleanup
 * comment tweaks about when to set pool_data
 * removed an un-needed check for ops->alloc == NULL

v9 changes:

 * added a check for NULL alloc in rte_mempool_ops_register
 * rte_mempool_alloc_t now returns int instead of void*
 * fixed some comment typos
 * removed some unneeded typecasts
 * changed a return NULL to return -EEXIST in rte_mempool_ops_register
 * fixed rte_mempool_version.map file so builds ok as shared libs
 * moved flags check from rte_mempool_create_empty to rte_mempool_create

v8 changes:

 * merged first three patches in the series into one.
 * changed parameters to ops callback to all be rte_mempool pointer
   rather than than pointer to opaque data or uint64.
 * comment fixes.
 * fixed parameter to _free function (was inconsistent).
 * changed MEMPOOL_F_RING_CREATED to MEMPOOL_F_POOL_CREATED

v7 changes:

 * Changed rte_mempool_handler_table to rte_mempool_ops_table
 * Changed handler_idx to ops_index in rte_mempool struct
 * Reworked comments in rte_mempool.h around ops functions
 * Changed rte_mempool_handler.c to rte_mempool_ops.c
 * Changed all functions containing _handler_ to _ops_
 * Now there is no mention of 'handler' left
 * Other small changes out of review of mailing list

v6 changes:

 * Moved the flags handling from rte_mempool_create_empty to
   rte_mempool_create, as it's only there for backward compatibility
 * Various comment additions and cleanup
 * Renamed rte_mempool_handler to rte_mempool_ops
 * Added a union for *pool and u64 pool_id in struct rte_mempool
 * split the original patch into a few parts for easier review.
 * rename functions with _ext_ to _ops_.
 * addressed review comments
 * renamed put and get functions to enqueue and dequeue
 * changed occurrences of rte_mempool_ops to const, as they
   contain function pointers (security)
 * split out the default external mempool handler into a separate
   patch for easier review

v5 changes:
 * rebasing, as it is dependent on another patch series [1]

v4 changes (Olivier Matz):
 * remove the rte_mempool_create_ext() function. To change the handler, the
   user has to do the following:
   - mp = rte_mempool_create_empty()
   - rte_mempool_set_handler(mp, "my_handler")
   - rte_mempool_populate_default(mp)
   This avoids adding another function with more than 10 arguments, duplicating
   the doxygen comments
 * change the api of rte_mempool_alloc_t: only the mempool pointer is required
   as all information is available in it
 * change the api of rte_mempool_free_t: remove return value
 * move inline wrapper functions from the .c to the .h (else they won't be
   inlined). This implies to have one header file (rte_mempool.h), or it
   would have generate cross dependencies issues.
 * remove now unused MEMPOOL_F_INT_HANDLER (note: it was misused anyway due
   to the use of && instead of &)
 * fix build in debug mode (__MEMPOOL_STAT_ADD(mp, put_pool, n) remaining)
 * fix build with shared libraries (global handler has to be declared in
   the .map file)
 * rationalize #include order
 * remove unused function rte_mempool_get_handler_name()
 * rename some structures, fields, functions
 * remove the static in front of rte_tailq_elem rte_mempool_tailq (comment
   from Yuanhan)
 * test the ext mempool handler in the same file as the standard mempool
   tests, avoiding code duplication
 * rework the custom handler in mempool_test
 * rework a bit the patch selecting default mbuf pool handler
 * fix some doxygen comments

v3 changes:
 * simplified the file layout, renamed to rte_mempool_handler.[hc]
 * moved the default handlers into rte_mempool_default.c
 * moved the example handler out into app/test/test_ext_mempool.c
 * removed is_mc/is_mp change, slight perf degradation on sp cached operation
 * removed stack handler, may re-introduce at a later date
 * Changes out of code reviews

v2 changes:
 * There was a lot of duplicate code between rte_mempool_xmem_create and
   rte_mempool_create_ext. This has now been refactored and is now
   hopefully cleaner.
 * The RTE_NEXT_ABI define is now used to allow building of the library
   in a format that is compatible with binaries built against previous
   versions of DPDK.
 * Changes out of code reviews. Hopefully I've got most of them included.

The External Mempool Manager is an extension to the mempool API that allows
users to add and use an external mempool manager, which allows external memory
subsystems such as external hardware memory management systems and software
based memory allocators to be used with DPDK.

The existing API to the internal DPDK mempool manager will remain unchanged
and will be backward compatible. However, there will be an ABI breakage, as
the mempool struct is changing. These changes are all contained within
RTE_NEXT_ABI defines, and the current or next code can be selected with
the CONFIG_RTE_NEXT_ABI config setting.

There are two aspects to external mempool manager.
  1. Adding the code for your new mempool operations (ops). This is
     achieved by adding a new mempool ops source file into the
     librte_mempool library, and using the MEMPOOL_REGISTER_OPS macro.
  2. Using the new API to call rte_mempool_create_empty and
     rte_mempool_set_ops_byname to create a new mempool
     using the name parameter to identify which ops to use.

New API calls added
 1. A new rte_mempool_create_empty() function
 2. rte_mempool_set_ops_byname() which sets the mempool's ops (functions)
 3. The rte_mempool_populate_default() and rte_mempool_populate_anon()
    functions, which populate the mempool using the relevant ops

Several external mempool managers may be used in the same application. A new
mempool can then be created by using the new rte_mempool_create_empty function,
then calling rte_mempool_set_ops_byname to point the mempool to the relevant
mempool manager callback structure.

Legacy applications will continue to work with the old rte_mempool_create
API call, which uses a ring-based mempool manager by default. Such
applications need to be modified only if they want to use a new external
mempool manager.

The external mempool manager needs to provide the following functions.
 1. alloc     - allocates the mempool memory, and adds each object onto a ring
 2. enqueue   - puts an object back into the mempool once an application has
                finished with it
 3. dequeue   - gets an object from the mempool for use by the application
 4. get_count - gets the number of available objects in the mempool
 5. free      - frees the mempool memory

Every time an enqueue/dequeue/get_count is called from the application/PMD,
the callback for that mempool is called. These functions are in the fastpath,
and any unoptimised ops may limit performance.

The new APIs are as follows:

1. rte_mempool_create_empty

struct rte_mempool *
rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
    unsigned cache_size, unsigned private_data_size,
    int socket_id, unsigned flags);

2. rte_mempool_set_ops_byname()

int
rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name);

3. rte_mempool_populate_default()

int rte_mempool_populate_default(struct rte_mempool *mp);

4. rte_mempool_populate_anon()

int rte_mempool_populate_anon(struct rte_mempool *mp);

Please see rte_mempool.h for further information on the parameters.
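Putting the calls above together, the flow for creating a pool with a
non-default handler looks like this (a sketch based on the test code in
patch 2/3; error handling abbreviated):

```c
struct rte_mempool *mp;

/* Create an empty pool, then attach the ops *before* populating it;
 * rte_mempool_set_ops_byname() fails with -EEXIST once the pool has
 * been populated. */
mp = rte_mempool_create_empty("my_pool", n, elt_size, cache_size,
			      private_data_size, SOCKET_ID_ANY, 0);
if (mp == NULL)
	return NULL;

if (rte_mempool_set_ops_byname(mp, "custom_handler") < 0) {
	rte_mempool_free(mp);
	return NULL;
}

if (rte_mempool_populate_default(mp) < 0) {
	rte_mempool_free(mp);
	return NULL;
}
```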


The important thing to note is that the mempool ops struct is passed by name
to rte_mempool_set_ops_byname, which looks through the ops struct array to
get the ops_index, which is then stored in the rte_mempool structure. This
allows multiple processes to use the same mempool, as the function pointers
are accessed via the ops index.

The mempool ops structure contains callbacks to the implementation of
the ops function, and is set up for registration as follows:

static const struct rte_mempool_ops ops_sp_mc = {
    .name = "ring_sp_mc",
    .alloc = rte_mempool_common_ring_alloc,
    .enqueue = common_ring_sp_enqueue,
    .dequeue = common_ring_mc_dequeue,
    .get_count = common_ring_get_count,
    .free = common_ring_free,
};

The following macro will then register the ops in the array of ops
structures:

MEMPOOL_REGISTER_OPS(ops_sp_mc);

For an example of API usage, please see app/test/test_mempool.c, which
implements a rudimentary "custom_handler" mempool manager using simple mallocs
for each mempool object. This file also contains the callbacks and self
registration for the new handler.
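A malloc-backed handler along those lines can be modelled with the following
self-contained sketch (hypothetical names and simplified signatures — the
actual test code in app/test/test_mempool.c uses the rte_mempool_ops
callback prototypes shown earlier):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical model of a malloc-backed handler: alloc creates the
 * private pool data, free releases it, enqueue/dequeue manage a
 * simple array of object pointers. */
struct malloc_pool {
	void **objs;
	unsigned count;
	unsigned size;
};

static int custom_alloc(void **pool_data, unsigned size)
{
	struct malloc_pool *p = malloc(sizeof(*p));

	if (p == NULL)
		return -1;
	p->objs = calloc(size, sizeof(void *));
	if (p->objs == NULL) {
		free(p);
		return -1;
	}
	p->count = 0;
	p->size = size;
	*pool_data = p; /* corresponds to setting mp->pool_data */
	return 0;
}

static void custom_free(void *pool_data)
{
	struct malloc_pool *p = pool_data;

	free(p->objs);
	free(p);
}

static int custom_enqueue(void *pool_data, void *obj)
{
	struct malloc_pool *p = pool_data;

	if (p->count >= p->size)
		return -1;
	p->objs[p->count++] = obj;
	return 0;
}

static int custom_dequeue(void *pool_data, void **obj)
{
	struct malloc_pool *p = pool_data;

	if (p->count == 0)
		return -1;
	*obj = p->objs[--p->count];
	return 0;
}
```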

David Hunt (2):
  mempool: support external mempool operations
  mbuf: make default mempool ops configurable at build

Olivier Matz (1):
  app/test: test external mempool handler

^ permalink raw reply	[flat|nested] 238+ messages in thread

* [PATCH v11 1/3] mempool: support external mempool operations
  2016-06-14 15:48                   ` [PATCH v11 " David Hunt
@ 2016-06-14 15:48                     ` David Hunt
  2016-06-14 16:08                       ` Thomas Monjalon
  2016-06-14 15:49                     ` [PATCH v11 2/3] app/test: test external mempool manager David Hunt
                                       ` (2 subsequent siblings)
  3 siblings, 1 reply; 238+ messages in thread
From: David Hunt @ 2016-06-14 15:48 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, viktorin, jerin.jacob, shreyansh.jain, David Hunt

Until now, the objects stored in a mempool were internally stored in a
ring. This patch introduces the possibility to register external handlers
replacing the ring.

The default behavior remains unchanged, but calling the new function
rte_mempool_set_ops_byname() right after rte_mempool_create_empty() allows
the user to change the handler that will be used when populating
the mempool.

This patch also adds a set of default ops (function callbacks) based
on rte_ring.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
 app/test/test_mempool_perf.c               |   1 -
 doc/guides/prog_guide/mempool_lib.rst      |  31 +++-
 doc/guides/rel_notes/deprecation.rst       |   9 --
 lib/librte_mempool/Makefile                |   2 +
 lib/librte_mempool/rte_mempool.c           |  66 +++-----
 lib/librte_mempool/rte_mempool.h           | 251 ++++++++++++++++++++++++++---
 lib/librte_mempool/rte_mempool_ops.c       | 148 +++++++++++++++++
 lib/librte_mempool/rte_mempool_ring.c      | 161 ++++++++++++++++++
 lib/librte_mempool/rte_mempool_version.map |  13 +-
 9 files changed, 601 insertions(+), 81 deletions(-)
 create mode 100644 lib/librte_mempool/rte_mempool_ops.c
 create mode 100644 lib/librte_mempool/rte_mempool_ring.c

diff --git a/app/test/test_mempool_perf.c b/app/test/test_mempool_perf.c
index c5e3576..c5f8455 100644
--- a/app/test/test_mempool_perf.c
+++ b/app/test/test_mempool_perf.c
@@ -161,7 +161,6 @@ per_lcore_mempool_test(__attribute__((unused)) void *arg)
 							   n_get_bulk);
 				if (unlikely(ret < 0)) {
 					rte_mempool_dump(stdout, mp);
-					rte_ring_dump(stdout, mp->ring);
 					/* in this case, objects are lost... */
 					return -1;
 				}
diff --git a/doc/guides/prog_guide/mempool_lib.rst b/doc/guides/prog_guide/mempool_lib.rst
index c3afc2e..6e358d5 100644
--- a/doc/guides/prog_guide/mempool_lib.rst
+++ b/doc/guides/prog_guide/mempool_lib.rst
@@ -34,7 +34,7 @@ Mempool Library
 ===============
 
 A memory pool is an allocator of a fixed-sized object.
-In the DPDK, it is identified by name and uses a ring to store free objects.
+In the DPDK, it is identified by name and uses a ring or an external mempool manager to store free objects.
 It provides some other optional services such as a per-core object cache and
 an alignment helper to ensure that objects are padded to spread them equally on all DRAM or DDR3 channels.
 
@@ -127,6 +127,35 @@ The maximum size of the cache is static and is defined at compilation time (CONF
    A mempool in Memory with its Associated Ring
 
 
+External Mempool Manager
+------------------------
+
+This allows external memory subsystems, such as external hardware memory
+management systems and software based memory allocators, to be used with DPDK.
+
+There are two aspects to the external mempool manager.
+
+* Adding the code for your new mempool operations (ops). This is achieved by
+  adding a new mempool ops code, and using the ``MEMPOOL_REGISTER_OPS`` macro.
+
+* Using the new API to call ``rte_mempool_create_empty()`` and
+  ``rte_mempool_set_ops_byname()`` to create a new mempool and specifying which
+  ops to use.
+
+Several external mempool managers may be used in the same application. A new
+mempool can be created by using the ``rte_mempool_create_empty()`` function,
+then using ``rte_mempool_set_ops_byname()`` to point the mempool to the
+relevant mempool manager callback (ops) structure.
+
+Legacy applications may continue to use the old ``rte_mempool_create()`` API
+call, which uses a ring based mempool manager by default. These applications
+will need to be modified to use a new external mempool manager.
+
+For applications that use ``rte_pktmbuf_pool_create()``, there is a config
+(``RTE_MBUF_DEFAULT_MEMPOOL_OPS``) that allows the application to make use of
+an external mempool manager.
+
+
 Use Cases
 ---------
 
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index bda40c1..5708eef 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -45,15 +45,6 @@ Deprecation Notices
   compact API. The ones that remain are backwards compatible and use the
   per-lcore default cache if available. This change targets release 16.07.
 
-* The rte_mempool struct will be changed in 16.07 to facilitate the new
-  external mempool manager functionality.
-  The ring element will be replaced with a more generic 'pool' opaque pointer
-  to allow new mempool handlers to use their own user-defined mempool
-  layout. Also newly added to rte_mempool is a handler index.
-  The existing API will be backward compatible, but there will be new API
-  functions added to facilitate the creation of mempools using an external
-  handler. The 16.07 release will contain these changes.
-
 * A librte_vhost public structures refactor is planned for DPDK 16.07
   that requires both ABI and API change.
   The proposed refactor would expose DPDK vhost dev to applications as
diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
index 43423e0..a4c089e 100644
--- a/lib/librte_mempool/Makefile
+++ b/lib/librte_mempool/Makefile
@@ -42,6 +42,8 @@ LIBABIVER := 2
 
 # all source are stored in SRCS-y
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool.c
+SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_ops.c
+SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_ring.c
 # install includes
 SYMLINK-$(CONFIG_RTE_LIBRTE_MEMPOOL)-include := rte_mempool.h
 
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 22a5645..ac40cb3 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -148,7 +148,7 @@ mempool_add_elem(struct rte_mempool *mp, void *obj, phys_addr_t physaddr)
 #endif
 
 	/* enqueue in ring */
-	rte_ring_sp_enqueue(mp->ring, obj);
+	rte_mempool_ops_enqueue_bulk(mp, &obj, 1);
 }
 
 /* call obj_cb() for each mempool element */
@@ -303,40 +303,6 @@ rte_mempool_xmem_usage(__rte_unused void *vaddr, uint32_t elt_num,
 	return (size_t)paddr_idx << pg_shift;
 }
 
-/* create the internal ring */
-static int
-rte_mempool_ring_create(struct rte_mempool *mp)
-{
-	int rg_flags = 0, ret;
-	char rg_name[RTE_RING_NAMESIZE];
-	struct rte_ring *r;
-
-	ret = snprintf(rg_name, sizeof(rg_name),
-		RTE_MEMPOOL_MZ_FORMAT, mp->name);
-	if (ret < 0 || ret >= (int)sizeof(rg_name))
-		return -ENAMETOOLONG;
-
-	/* ring flags */
-	if (mp->flags & MEMPOOL_F_SP_PUT)
-		rg_flags |= RING_F_SP_ENQ;
-	if (mp->flags & MEMPOOL_F_SC_GET)
-		rg_flags |= RING_F_SC_DEQ;
-
-	/* Allocate the ring that will be used to store objects.
-	 * Ring functions will return appropriate errors if we are
-	 * running as a secondary process etc., so no checks made
-	 * in this function for that condition.
-	 */
-	r = rte_ring_create(rg_name, rte_align32pow2(mp->size + 1),
-		mp->socket_id, rg_flags);
-	if (r == NULL)
-		return -rte_errno;
-
-	mp->ring = r;
-	mp->flags |= MEMPOOL_F_RING_CREATED;
-	return 0;
-}
-
 /* free a memchunk allocated with rte_memzone_reserve() */
 static void
 rte_mempool_memchunk_mz_free(__rte_unused struct rte_mempool_memhdr *memhdr,
@@ -354,7 +320,7 @@ rte_mempool_free_memchunks(struct rte_mempool *mp)
 	void *elt;
 
 	while (!STAILQ_EMPTY(&mp->elt_list)) {
-		rte_ring_sc_dequeue(mp->ring, &elt);
+		rte_mempool_ops_dequeue_bulk(mp, &elt, 1);
 		(void)elt;
 		STAILQ_REMOVE_HEAD(&mp->elt_list, next);
 		mp->populated_size--;
@@ -386,9 +352,9 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
 	int ret;
 
 	/* create the internal ring if not already done */
-	if ((mp->flags & MEMPOOL_F_RING_CREATED) == 0) {
-		ret = rte_mempool_ring_create(mp);
-		if (ret < 0)
+	if ((mp->flags & MEMPOOL_F_POOL_CREATED) == 0) {
+		ret = rte_mempool_ops_alloc(mp);
+		if (ret != 0)
 			return ret;
 	}
 
@@ -703,7 +669,7 @@ rte_mempool_free(struct rte_mempool *mp)
 	rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
 
 	rte_mempool_free_memchunks(mp);
-	rte_ring_free(mp->ring);
+	rte_mempool_ops_free(mp);
 	rte_memzone_free(mp->mz);
 }
 
@@ -815,6 +781,7 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
 		RTE_PTR_ADD(mp, MEMPOOL_HEADER_SIZE(mp, 0));
 
 	te->data = mp;
+
 	rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
 	TAILQ_INSERT_TAIL(mempool_list, te, next);
 	rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
@@ -844,6 +811,19 @@ rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
 	if (mp == NULL)
 		return NULL;
 
+	/*
+	 * Since we have 4 combinations of the SP/SC/MP/MC examine the flags to
+	 * set the correct index into the table of ops structs.
+	 */
+	if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
+		rte_mempool_set_ops_byname(mp, "ring_sp_sc");
+	else if (flags & MEMPOOL_F_SP_PUT)
+		rte_mempool_set_ops_byname(mp, "ring_sp_mc");
+	else if (flags & MEMPOOL_F_SC_GET)
+		rte_mempool_set_ops_byname(mp, "ring_mp_sc");
+	else
+		rte_mempool_set_ops_byname(mp, "ring_mp_mc");
+
 	/* call the mempool priv initializer */
 	if (mp_init)
 		mp_init(mp, mp_init_arg);
@@ -930,7 +910,7 @@ rte_mempool_count(const struct rte_mempool *mp)
 	unsigned count;
 	unsigned lcore_id;
 
-	count = rte_ring_count(mp->ring);
+	count = rte_mempool_ops_get_count(mp);
 
 	if (mp->cache_size == 0)
 		return count;
@@ -1119,7 +1099,7 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 
 	fprintf(f, "mempool <%s>@%p\n", mp->name, mp);
 	fprintf(f, "  flags=%x\n", mp->flags);
-	fprintf(f, "  ring=<%s>@%p\n", mp->ring->name, mp->ring);
+	fprintf(f, "  pool=%p\n", mp->pool_data);
 	fprintf(f, "  phys_addr=0x%" PRIx64 "\n", mp->mz->phys_addr);
 	fprintf(f, "  nb_mem_chunks=%u\n", mp->nb_mem_chunks);
 	fprintf(f, "  size=%"PRIu32"\n", mp->size);
@@ -1140,7 +1120,7 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 	}
 
 	cache_count = rte_mempool_dump_cache(f, mp);
-	common_count = rte_ring_count(mp->ring);
+	common_count = rte_mempool_ops_get_count(mp);
 	if ((cache_count + common_count) > mp->size)
 		common_count = mp->size - cache_count;
 	fprintf(f, "  common_pool_count=%u\n", common_count);
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 60339bd..e429f3f 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -67,6 +67,7 @@
 #include <inttypes.h>
 #include <sys/queue.h>
 
+#include <rte_spinlock.h>
 #include <rte_log.h>
 #include <rte_debug.h>
 #include <rte_lcore.h>
@@ -203,10 +204,13 @@ struct rte_mempool_memhdr {
  */
 struct rte_mempool {
 	char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool. */
-	struct rte_ring *ring;           /**< Ring to store objects. */
-	const struct rte_memzone *mz;    /**< Memzone where pool is allocated */
+	union {
+		void *pool_data;         /**< Ring or pool to store objects. */
+		uint64_t pool_id;        /**< External mempool identifier. */
+	};
+	const struct rte_memzone *mz;    /**< Memzone where pool is alloc'd. */
 	int flags;                       /**< Flags of the mempool. */
-	int socket_id;                   /**< Socket id passed at mempool creation. */
+	int socket_id;                   /**< Socket id passed at create. */
 	uint32_t size;                   /**< Max size of the mempool. */
 	uint32_t cache_size;             /**< Size of per-lcore local cache. */
 	uint32_t cache_flushthresh;
@@ -217,6 +221,14 @@ struct rte_mempool {
 	uint32_t trailer_size;           /**< Size of trailer (after elt). */
 
 	unsigned private_data_size;      /**< Size of private data. */
+	/**
+	 * Index into rte_mempool_ops_table array of mempool ops
+	 * structs, which contain callback function pointers.
+	 * We're using an index here rather than pointers to the callbacks
+	 * to facilitate any secondary processes that may want to use
+	 * this mempool.
+	 */
+	int32_t ops_index;
 
 	struct rte_mempool_cache *local_cache; /**< Per-lcore local cache */
 
@@ -235,7 +247,7 @@ struct rte_mempool {
 #define MEMPOOL_F_NO_CACHE_ALIGN 0x0002 /**< Do not align objs on cache lines.*/
 #define MEMPOOL_F_SP_PUT         0x0004 /**< Default put is "single-producer".*/
 #define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
-#define MEMPOOL_F_RING_CREATED   0x0010 /**< Internal: ring is created */
+#define MEMPOOL_F_POOL_CREATED   0x0010 /**< Internal: pool is created. */
 #define MEMPOOL_F_NO_PHYS_CONTIG 0x0020 /**< Don't need physically contiguous objs. */
 
 /**
@@ -325,6 +337,212 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp,
 #define __mempool_check_cookies(mp, obj_table_const, n, free) do {} while(0)
 #endif /* RTE_LIBRTE_MEMPOOL_DEBUG */
 
+#define RTE_MEMPOOL_OPS_NAMESIZE 32 /**< Max length of ops struct name. */
+
+/**
+ * Prototype for implementation specific data provisioning function.
+ *
+ * The function should provide the implementation specific memory
+ * for use by the other mempool ops functions in a given mempool ops
+ * struct. E.g. the default ops provides an instance of the rte_ring
+ * for this purpose. Other implementations will most likely use a
+ * different data structure, transparent to the application programmer.
+ * This function should set mp->pool_data.
+ */
+typedef int (*rte_mempool_alloc_t)(struct rte_mempool *mp);
+
+/**
+ * Free the opaque private data pointed to by mp->pool_data pointer.
+ */
+typedef void (*rte_mempool_free_t)(struct rte_mempool *mp);
+
+/**
+ * Enqueue an object into the external pool.
+ */
+typedef int (*rte_mempool_enqueue_t)(struct rte_mempool *mp,
+		void * const *obj_table, unsigned int n);
+
+/**
+ * Dequeue an object from the external pool.
+ */
+typedef int (*rte_mempool_dequeue_t)(struct rte_mempool *mp,
+		void **obj_table, unsigned int n);
+
+/**
+ * Return the number of available objects in the external pool.
+ */
+typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
+
+/** Structure defining mempool operations structure */
+struct rte_mempool_ops {
+	char name[RTE_MEMPOOL_OPS_NAMESIZE]; /**< Name of mempool ops struct. */
+	rte_mempool_alloc_t alloc;       /**< Allocate private data. */
+	rte_mempool_free_t free;         /**< Free the external pool. */
+	rte_mempool_enqueue_t enqueue;   /**< Enqueue an object. */
+	rte_mempool_dequeue_t dequeue;   /**< Dequeue an object. */
+	rte_mempool_get_count get_count; /**< Get qty of available objs. */
+} __rte_cache_aligned;
+
+#define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */
+
+/**
+ * Structure storing the table of registered ops structs, each of which contain
+ * the function pointers for the mempool ops functions.
+ * Each process has its own storage for this ops struct array so that
+ * the mempools can be shared across primary and secondary processes.
+ * The indices used to access the array are valid across processes, whereas
+ * any function pointers stored directly in the mempool struct would not be.
+ * This results in us simply having "ops_index" in the mempool struct.
+ */
+struct rte_mempool_ops_table {
+	rte_spinlock_t sl;     /**< Spinlock for add/delete. */
+	uint32_t num_ops;      /**< Number of used ops structs in the table. */
+	/**
+	 * Storage for all possible ops structs.
+	 */
+	struct rte_mempool_ops ops[RTE_MEMPOOL_MAX_OPS_IDX];
+} __rte_cache_aligned;
+
+/** Array of registered ops structs. */
+extern struct rte_mempool_ops_table rte_mempool_ops_table;
+
+/**
+ * @internal Get the mempool ops struct from its index.
+ *
+ * @param ops_index
+ *   The index of the ops struct in the ops struct table. It must be a valid
+ *   index: (0 <= idx < num_ops).
+ * @return
+ *   The pointer to the ops struct in the table.
+ */
+static inline struct rte_mempool_ops *
+rte_mempool_ops_get(int ops_index)
+{
+	RTE_VERIFY(ops_index < RTE_MEMPOOL_MAX_OPS_IDX);
+
+	return &rte_mempool_ops_table.ops[ops_index];
+}
+
+/**
+ * @internal Wrapper for mempool_ops alloc callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @return
+ *   - 0: Success; successfully allocated mempool pool_data.
+ *   - <0: Error; code of alloc function.
+ */
+int
+rte_mempool_ops_alloc(struct rte_mempool *mp);
+
+/**
+ * @internal Wrapper for mempool_ops get callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param obj_table
+ *   Pointer to a table of void * pointers (objects).
+ * @param n
+ *   Number of objects to get.
+ * @return
+ *   - 0: Success; got n objects.
+ *   - <0: Error; code of get function.
+ */
+static inline int
+rte_mempool_ops_dequeue_bulk(struct rte_mempool *mp,
+		void **obj_table, unsigned n)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_ops_get(mp->ops_index);
+	return ops->dequeue(mp, obj_table, n);
+}
+
+/**
+ * @internal wrapper for mempool_ops put callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param obj_table
+ *   Pointer to a table of void * pointers (objects).
+ * @param n
+ *   Number of objects to put.
+ * @return
+ *   - 0: Success; n objects supplied.
+ *   - <0: Error; code of put function.
+ */
+static inline int
+rte_mempool_ops_enqueue_bulk(struct rte_mempool *mp, void * const *obj_table,
+		unsigned n)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_ops_get(mp->ops_index);
+	return ops->enqueue(mp, obj_table, n);
+}
+
+/**
+ * @internal wrapper for mempool_ops get_count callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @return
+ *   The number of available objects in the external pool.
+ */
+unsigned
+rte_mempool_ops_get_count(const struct rte_mempool *mp);
+
+/**
+ * @internal wrapper for mempool_ops free callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ */
+void
+rte_mempool_ops_free(struct rte_mempool *mp);
+
+/**
+ * Set the ops of a mempool.
+ *
+ * This can only be done on a mempool that is not populated, i.e. just after
+ * a call to rte_mempool_create_empty().
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param name
+ *   Name of the ops structure to use for this mempool.
+ * @return
+ *   - 0: Success; the mempool is now using the requested ops functions.
+ *   - -EINVAL - Invalid ops struct name provided.
+ *   - -EEXIST - mempool already has an ops struct assigned.
+ */
+int
+rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name);
+
+/**
+ * Register mempool operations.
+ *
+ * @param h
+ *   Pointer to an ops structure to register.
+ * @return
+ *   - >=0: Success; return the index of the ops struct in the table.
+ *   - -EINVAL - some missing callbacks while registering ops struct.
+ *   - -ENOSPC - the maximum number of ops structs has been reached.
+ */
+int rte_mempool_ops_register(const struct rte_mempool_ops *ops);
+
+/**
+ * Macro to statically register the ops of an external mempool manager.
+ * Note that rte_mempool_ops_register fails silently here when more
+ * than RTE_MEMPOOL_MAX_OPS_IDX ops structs are registered.
+ */
+#define MEMPOOL_REGISTER_OPS(ops)					\
+	void mp_hdlr_init_##ops(void);					\
+	void __attribute__((constructor, used)) mp_hdlr_init_##ops(void)\
+	{								\
+		rte_mempool_ops_register(&ops);			\
+	}
+
 /**
  * An object callback function for mempool.
  *
@@ -774,7 +992,7 @@ __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
 	cache->len += n;
 
 	if (cache->len >= flushthresh) {
-		rte_ring_mp_enqueue_bulk(mp->ring, &cache->objs[cache_size],
+		rte_mempool_ops_enqueue_bulk(mp, &cache->objs[cache_size],
 				cache->len - cache_size);
 		cache->len = cache_size;
 	}
@@ -785,19 +1003,10 @@ ring_enqueue:
 
 	/* push remaining objects in ring */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-	if (is_mp) {
-		if (rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n) < 0)
-			rte_panic("cannot put objects in mempool\n");
-	}
-	else {
-		if (rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n) < 0)
-			rte_panic("cannot put objects in mempool\n");
-	}
+	if (rte_mempool_ops_enqueue_bulk(mp, obj_table, n) < 0)
+		rte_panic("cannot put objects in mempool\n");
 #else
-	if (is_mp)
-		rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n);
-	else
-		rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n);
+	rte_mempool_ops_enqueue_bulk(mp, obj_table, n);
 #endif
 }
 
@@ -945,7 +1154,8 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
 		uint32_t req = n + (cache_size - cache->len);
 
 		/* How many do we require i.e. number to fill the cache + the request */
-		ret = rte_ring_mc_dequeue_bulk(mp->ring, &cache->objs[cache->len], req);
+		ret = rte_mempool_ops_dequeue_bulk(mp,
+			&cache->objs[cache->len], req);
 		if (unlikely(ret < 0)) {
 			/*
 			 * In the offchance that we are buffer constrained,
@@ -972,10 +1182,7 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
 ring_dequeue:
 
 	/* get remaining objects from ring */
-	if (is_mc)
-		ret = rte_ring_mc_dequeue_bulk(mp->ring, obj_table, n);
-	else
-		ret = rte_ring_sc_dequeue_bulk(mp->ring, obj_table, n);
+	ret = rte_mempool_ops_dequeue_bulk(mp, obj_table, n);
 
 	if (ret < 0)
 		__MEMPOOL_STAT_ADD(mp, get_fail, n);
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
new file mode 100644
index 0000000..9328b77
--- /dev/null
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -0,0 +1,148 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2016 6WIND S.A.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <string.h>
+
+#include <rte_mempool.h>
+#include <rte_errno.h>
+
+/* indirect jump table to support external memory pools. */
+struct rte_mempool_ops_table rte_mempool_ops_table = {
+	.sl =  RTE_SPINLOCK_INITIALIZER,
+	.num_ops = 0
+};
+
+/* add a new ops struct in rte_mempool_ops_table, return its index. */
+int
+rte_mempool_ops_register(const struct rte_mempool_ops *h)
+{
+	struct rte_mempool_ops *ops;
+	int16_t ops_index;
+
+	rte_spinlock_lock(&rte_mempool_ops_table.sl);
+
+	if (rte_mempool_ops_table.num_ops >=
+			RTE_MEMPOOL_MAX_OPS_IDX) {
+		rte_spinlock_unlock(&rte_mempool_ops_table.sl);
+		RTE_LOG(ERR, MEMPOOL,
+			"Maximum number of mempool ops structs exceeded\n");
+		return -ENOSPC;
+	}
+
+	if (h->alloc == NULL || h->enqueue == NULL ||
+			h->dequeue == NULL || h->get_count == NULL) {
+		rte_spinlock_unlock(&rte_mempool_ops_table.sl);
+		RTE_LOG(ERR, MEMPOOL,
+			"Missing callback while registering mempool ops\n");
+		return -EINVAL;
+	}
+
+	if (strlen(h->name) >= sizeof(ops->name) - 1) {
+		RTE_LOG(DEBUG, EAL, "%s(): mempool_ops <%s>: name too long\n",
+				__func__, h->name);
+		rte_errno = EEXIST;
+		return -EEXIST;
+	}
+
+	ops_index = rte_mempool_ops_table.num_ops++;
+	ops = &rte_mempool_ops_table.ops[ops_index];
+	snprintf(ops->name, sizeof(ops->name), "%s", h->name);
+	ops->alloc = h->alloc;
+	ops->enqueue = h->enqueue;
+	ops->dequeue = h->dequeue;
+	ops->get_count = h->get_count;
+
+	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
+
+	return ops_index;
+}
+
+/* wrapper to allocate an external mempool's private (pool) data. */
+int
+rte_mempool_ops_alloc(struct rte_mempool *mp)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_ops_get(mp->ops_index);
+	return ops->alloc(mp);
+}
+
+/* wrapper to free an external pool ops. */
+void
+rte_mempool_ops_free(struct rte_mempool *mp)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_ops_get(mp->ops_index);
+	if (ops->free == NULL)
+		return;
+	return ops->free(mp);
+}
+
+/* wrapper to get available objects in an external mempool. */
+unsigned int
+rte_mempool_ops_get_count(const struct rte_mempool *mp)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_ops_get(mp->ops_index);
+	return ops->get_count(mp);
+}
+
+/* sets mempool ops previously registered by rte_mempool_ops_register. */
+int
+rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name)
+{
+	struct rte_mempool_ops *ops = NULL;
+	unsigned i;
+
+	/* too late, the mempool is already populated. */
+	if (mp->flags & MEMPOOL_F_POOL_CREATED)
+		return -EEXIST;
+
+	for (i = 0; i < rte_mempool_ops_table.num_ops; i++) {
+		if (!strcmp(name,
+				rte_mempool_ops_table.ops[i].name)) {
+			ops = &rte_mempool_ops_table.ops[i];
+			break;
+		}
+	}
+
+	if (ops == NULL)
+		return -EINVAL;
+
+	mp->ops_index = i;
+	return 0;
+}
diff --git a/lib/librte_mempool/rte_mempool_ring.c b/lib/librte_mempool/rte_mempool_ring.c
new file mode 100644
index 0000000..626786e
--- /dev/null
+++ b/lib/librte_mempool/rte_mempool_ring.c
@@ -0,0 +1,161 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <string.h>
+
+#include <rte_errno.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+
+static int
+common_ring_mp_enqueue(struct rte_mempool *mp, void * const *obj_table,
+		unsigned n)
+{
+	return rte_ring_mp_enqueue_bulk(mp->pool_data, obj_table, n);
+}
+
+static int
+common_ring_sp_enqueue(struct rte_mempool *mp, void * const *obj_table,
+		unsigned n)
+{
+	return rte_ring_sp_enqueue_bulk(mp->pool_data, obj_table, n);
+}
+
+static int
+common_ring_mc_dequeue(struct rte_mempool *mp, void **obj_table, unsigned n)
+{
+	return rte_ring_mc_dequeue_bulk(mp->pool_data, obj_table, n);
+}
+
+static int
+common_ring_sc_dequeue(struct rte_mempool *mp, void **obj_table, unsigned n)
+{
+	return rte_ring_sc_dequeue_bulk(mp->pool_data, obj_table, n);
+}
+
+static unsigned
+common_ring_get_count(const struct rte_mempool *mp)
+{
+	return rte_ring_count(mp->pool_data);
+}
+
+
+static int
+common_ring_alloc(struct rte_mempool *mp)
+{
+	int rg_flags = 0, ret;
+	char rg_name[RTE_RING_NAMESIZE];
+	struct rte_ring *r;
+
+	ret = snprintf(rg_name, sizeof(rg_name),
+		RTE_MEMPOOL_MZ_FORMAT, mp->name);
+	if (ret < 0 || ret >= (int)sizeof(rg_name)) {
+		rte_errno = ENAMETOOLONG;
+		return -rte_errno;
+	}
+
+	/* ring flags */
+	if (mp->flags & MEMPOOL_F_SP_PUT)
+		rg_flags |= RING_F_SP_ENQ;
+	if (mp->flags & MEMPOOL_F_SC_GET)
+		rg_flags |= RING_F_SC_DEQ;
+
+	/*
+	 * Allocate the ring that will be used to store objects.
+	 * Ring functions will return appropriate errors if we are
+	 * running as a secondary process etc., so no checks made
+	 * in this function for that condition.
+	 */
+	r = rte_ring_create(rg_name, rte_align32pow2(mp->size + 1),
+		mp->socket_id, rg_flags);
+	if (r == NULL)
+		return -rte_errno;
+
+	mp->pool_data = r;
+
+	return 0;
+}
+
+static void
+common_ring_free(struct rte_mempool *mp)
+{
+	rte_ring_free(mp->pool_data);
+}
+
+/*
+ * The following 4 declarations of mempool ops structs address
+ * the need for the backward compatible mempool managers for
+ * single/multi producers and single/multi consumers as dictated by the
+ * flags provided to the rte_mempool_create function
+ */
+static const struct rte_mempool_ops ops_mp_mc = {
+	.name = "ring_mp_mc",
+	.alloc = common_ring_alloc,
+	.free = common_ring_free,
+	.enqueue = common_ring_mp_enqueue,
+	.dequeue = common_ring_mc_dequeue,
+	.get_count = common_ring_get_count,
+};
+
+static const struct rte_mempool_ops ops_sp_sc = {
+	.name = "ring_sp_sc",
+	.alloc = common_ring_alloc,
+	.free = common_ring_free,
+	.enqueue = common_ring_sp_enqueue,
+	.dequeue = common_ring_sc_dequeue,
+	.get_count = common_ring_get_count,
+};
+
+static const struct rte_mempool_ops ops_mp_sc = {
+	.name = "ring_mp_sc",
+	.alloc = common_ring_alloc,
+	.free = common_ring_free,
+	.enqueue = common_ring_mp_enqueue,
+	.dequeue = common_ring_sc_dequeue,
+	.get_count = common_ring_get_count,
+};
+
+static const struct rte_mempool_ops ops_sp_mc = {
+	.name = "ring_sp_mc",
+	.alloc = common_ring_alloc,
+	.free = common_ring_free,
+	.enqueue = common_ring_sp_enqueue,
+	.dequeue = common_ring_mc_dequeue,
+	.get_count = common_ring_get_count,
+};
+
+MEMPOOL_REGISTER_OPS(ops_mp_mc);
+MEMPOOL_REGISTER_OPS(ops_sp_sc);
+MEMPOOL_REGISTER_OPS(ops_mp_sc);
+MEMPOOL_REGISTER_OPS(ops_sp_mc);
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index f63461b..6209ec2 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -20,15 +20,18 @@ DPDK_16.7 {
 	global:
 
 	rte_mempool_check_cookies;
-	rte_mempool_obj_iter;
-	rte_mempool_mem_iter;
 	rte_mempool_create_empty;
+	rte_mempool_free;
+	rte_mempool_mem_iter;
+	rte_mempool_obj_iter;
+	rte_mempool_ops_register;
+	rte_mempool_ops_table;
+	rte_mempool_populate_anon;
+	rte_mempool_populate_default;
 	rte_mempool_populate_phys;
 	rte_mempool_populate_phys_tab;
 	rte_mempool_populate_virt;
-	rte_mempool_populate_default;
-	rte_mempool_populate_anon;
-	rte_mempool_free;
+	rte_mempool_set_ops_byname;
 
 	local: *;
 } DPDK_2.0;
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 238+ messages in thread

* [PATCH v11 2/3] app/test: test external mempool manager
  2016-06-14 15:48                   ` [PATCH v11 " David Hunt
  2016-06-14 15:48                     ` [PATCH v11 1/3] mempool: support external mempool operations David Hunt
@ 2016-06-14 15:49                     ` David Hunt
  2016-06-14 15:49                     ` [PATCH v11 3/3] mbuf: make default mempool ops configurable at build David Hunt
  2016-06-15  7:47                     ` [PATCH v12 0/3] mempool: add external mempool manager David Hunt
  3 siblings, 0 replies; 238+ messages in thread
From: David Hunt @ 2016-06-14 15:49 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, viktorin, jerin.jacob, shreyansh.jain, David Hunt

Use a minimal custom mempool external ops implementation and check that it
also passes the basic mempool autotests.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
 app/test/test_mempool.c | 122 +++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 120 insertions(+), 2 deletions(-)

diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c
index b586249..bcf379b 100644
--- a/app/test/test_mempool.c
+++ b/app/test/test_mempool.c
@@ -83,6 +83,99 @@
 static rte_atomic32_t synchro;
 
 /*
+ * Simple example of custom mempool structure. Holds pointers to all the
+ * elements which are simply malloc'd in this example.
+ */
+struct custom_mempool {
+	rte_spinlock_t lock;
+	unsigned count;
+	unsigned size;
+	void *elts[];
+};
+
+/*
+ * Loop through all the element pointers and allocate a chunk of memory, then
+ * insert that memory into the ring.
+ */
+static int
+custom_mempool_alloc(struct rte_mempool *mp)
+{
+	struct custom_mempool *cm;
+
+	cm = rte_zmalloc("custom_mempool",
+		sizeof(struct custom_mempool) + mp->size * sizeof(void *), 0);
+	if (cm == NULL)
+		return -ENOMEM;
+
+	rte_spinlock_init(&cm->lock);
+	cm->count = 0;
+	cm->size = mp->size;
+	mp->pool_data = cm;
+	return 0;
+}
+
+static void
+custom_mempool_free(struct rte_mempool *mp)
+{
+	rte_free((void *)(mp->pool_data));
+}
+
+static int
+custom_mempool_enqueue(struct rte_mempool *mp, void * const *obj_table,
+		unsigned n)
+{
+	struct custom_mempool *cm = (struct custom_mempool *)(mp->pool_data);
+	int ret = 0;
+
+	rte_spinlock_lock(&cm->lock);
+	if (cm->count + n > cm->size) {
+		ret = -ENOBUFS;
+	} else {
+		memcpy(&cm->elts[cm->count], obj_table, sizeof(void *) * n);
+		cm->count += n;
+	}
+	rte_spinlock_unlock(&cm->lock);
+	return ret;
+}
+
+
+static int
+custom_mempool_dequeue(struct rte_mempool *mp, void **obj_table, unsigned n)
+{
+	struct custom_mempool *cm = (struct custom_mempool *)(mp->pool_data);
+	int ret = 0;
+
+	rte_spinlock_lock(&cm->lock);
+	if (n > cm->count) {
+		ret = -ENOENT;
+	} else {
+		cm->count -= n;
+		memcpy(obj_table, &cm->elts[cm->count], sizeof(void *) * n);
+	}
+	rte_spinlock_unlock(&cm->lock);
+	return ret;
+}
+
+static unsigned
+custom_mempool_get_count(const struct rte_mempool *mp)
+{
+	struct custom_mempool *cm = (struct custom_mempool *)(mp->pool_data);
+
+	return cm->count;
+}
+
+static struct rte_mempool_ops mempool_ops_custom = {
+	.name = "custom_handler",
+	.alloc = custom_mempool_alloc,
+	.free = custom_mempool_free,
+	.enqueue = custom_mempool_enqueue,
+	.dequeue = custom_mempool_dequeue,
+	.get_count = custom_mempool_get_count,
+};
+
+MEMPOOL_REGISTER_OPS(mempool_ops_custom);
+
+/*
  * save the object number in the first 4 bytes of object data. All
  * other bytes are set to 0.
  */
@@ -292,12 +385,14 @@ static int test_mempool_single_consumer(void)
  * test function for mempool test based on singple consumer and single producer,
  * can run on one lcore only
  */
-static int test_mempool_launch_single_consumer(__attribute__((unused)) void *arg)
+static int
+test_mempool_launch_single_consumer(__attribute__((unused)) void *arg)
 {
 	return test_mempool_single_consumer();
 }
 
-static void my_mp_init(struct rte_mempool * mp, __attribute__((unused)) void * arg)
+static void
+my_mp_init(struct rte_mempool *mp, __attribute__((unused)) void *arg)
 {
 	printf("mempool name is %s\n", mp->name);
 	/* nothing to be implemented here*/
@@ -477,6 +572,7 @@ test_mempool(void)
 {
 	struct rte_mempool *mp_cache = NULL;
 	struct rte_mempool *mp_nocache = NULL;
+	struct rte_mempool *mp_ext = NULL;
 
 	rte_atomic32_init(&synchro);
 
@@ -505,6 +601,27 @@ test_mempool(void)
 		goto err;
 	}
 
+	/* create a mempool with an external handler */
+	mp_ext = rte_mempool_create_empty("test_ext",
+		MEMPOOL_SIZE,
+		MEMPOOL_ELT_SIZE,
+		RTE_MEMPOOL_CACHE_MAX_SIZE, 0,
+		SOCKET_ID_ANY, 0);
+
+	if (mp_ext == NULL) {
+		printf("cannot allocate mp_ext mempool\n");
+		goto err;
+	}
+	if (rte_mempool_set_ops_byname(mp_ext, "custom_handler") < 0) {
+		printf("cannot set custom handler\n");
+		goto err;
+	}
+	if (rte_mempool_populate_default(mp_ext) < 0) {
+		printf("cannot populate mp_ext mempool\n");
+		goto err;
+	}
+	rte_mempool_obj_iter(mp_ext, my_obj_init, NULL);
+
 	/* retrieve the mempool from its name */
 	if (rte_mempool_lookup("test_nocache") != mp_nocache) {
 		printf("Cannot lookup mempool from its name\n");
@@ -545,6 +662,7 @@ test_mempool(void)
 err:
 	rte_mempool_free(mp_nocache);
 	rte_mempool_free(mp_cache);
+	rte_mempool_free(mp_ext);
 	return -1;
 }
 
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 238+ messages in thread

* [PATCH v11 3/3] mbuf: make default mempool ops configurable at build
  2016-06-14 15:48                   ` [PATCH v11 " David Hunt
  2016-06-14 15:48                     ` [PATCH v11 1/3] mempool: support external mempool operations David Hunt
  2016-06-14 15:49                     ` [PATCH v11 2/3] app/test: test external mempool manager David Hunt
@ 2016-06-14 15:49                     ` David Hunt
  2016-06-15  7:47                     ` [PATCH v12 0/3] mempool: add external mempool manager David Hunt
  3 siblings, 0 replies; 238+ messages in thread
From: David Hunt @ 2016-06-14 15:49 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, viktorin, jerin.jacob, shreyansh.jain, David Hunt

By default, the mempool ops used for mbuf allocation are based on a
multi-producer, multi-consumer ring. One could imagine a target (some
network processors, perhaps) that provides a hardware-assisted pool
mechanism. In that case, the default configuration for that architecture
would contain a different value for RTE_MBUF_DEFAULT_MEMPOOL_OPS.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
 config/common_base         |  1 +
 lib/librte_mbuf/rte_mbuf.c | 26 ++++++++++++++++++++++----
 2 files changed, 23 insertions(+), 4 deletions(-)

diff --git a/config/common_base b/config/common_base
index 47c26f6..899c038 100644
--- a/config/common_base
+++ b/config/common_base
@@ -394,6 +394,7 @@ CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG=n
 #
 CONFIG_RTE_LIBRTE_MBUF=y
 CONFIG_RTE_LIBRTE_MBUF_DEBUG=n
+CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS="ring_mp_mc"
 CONFIG_RTE_MBUF_REFCNT_ATOMIC=y
 CONFIG_RTE_PKTMBUF_HEADROOM=128
 
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index eec1456..491230c 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -153,6 +153,7 @@ rte_pktmbuf_pool_create(const char *name, unsigned n,
 	unsigned cache_size, uint16_t priv_size, uint16_t data_room_size,
 	int socket_id)
 {
+	struct rte_mempool *mp;
 	struct rte_pktmbuf_pool_private mbp_priv;
 	unsigned elt_size;
 
@@ -167,10 +168,27 @@ rte_pktmbuf_pool_create(const char *name, unsigned n,
 	mbp_priv.mbuf_data_room_size = data_room_size;
 	mbp_priv.mbuf_priv_size = priv_size;
 
-	return rte_mempool_create(name, n, elt_size,
-		cache_size, sizeof(struct rte_pktmbuf_pool_private),
-		rte_pktmbuf_pool_init, &mbp_priv, rte_pktmbuf_init, NULL,
-		socket_id, 0);
+	mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
+		 sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
+	if (mp == NULL)
+		return NULL;
+
+	rte_errno = rte_mempool_set_ops_byname(mp,
+			RTE_MBUF_DEFAULT_MEMPOOL_OPS);
+	if (rte_errno != 0) {
+		RTE_LOG(ERR, MBUF, "error setting mempool handler\n");
+		return NULL;
+	}
+	rte_pktmbuf_pool_init(mp, &mbp_priv);
+
+	if (rte_mempool_populate_default(mp) < 0) {
+		rte_mempool_free(mp);
+		return NULL;
+	}
+
+	rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);
+
+	return mp;
 }
 
 /* do some sanity checks on a mbuf: panic if it fails */
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 238+ messages in thread

* Re: [PATCH v11 1/3] mempool: support external mempool operations
  2016-06-14 15:48                     ` [PATCH v11 1/3] mempool: support external mempool operations David Hunt
@ 2016-06-14 16:08                       ` Thomas Monjalon
  0 siblings, 0 replies; 238+ messages in thread
From: Thomas Monjalon @ 2016-06-14 16:08 UTC (permalink / raw)
  To: David Hunt; +Cc: dev, olivier.matz, viktorin, jerin.jacob, shreyansh.jain

2016-06-14 16:48, David Hunt:
> +Several external mempool managers may be used in the same application. A new
> +mempool can be created by using the ``rte_mempool_create_empty()`` function,
> +then using ``rte_mempool_set_ops_byname()`` to point the mempool to the
> +relevant mempool manager callbacki (ops) structure.

vim typo: callbacki

> +/**
> + * Register mempool operations.
> + *
> + * @param h
> + *   Pointer to and ops structure to register.

Same error as in v10.

> + * @return
> + *   - >=0: Success; return the index of the ops struct in the table.
> + *   - -EINVAL - some missing callbacks while registering ops struct.
> + *   - -ENOSPC - the maximum number of ops structs has been reached.
> + */
> +int rte_mempool_ops_register(const struct rte_mempool_ops *ops);

^ permalink raw reply	[flat|nested] 238+ messages in thread

* [PATCH v12 0/3] mempool: add external mempool manager
  2016-06-14 15:48                   ` [PATCH v11 " David Hunt
                                       ` (2 preceding siblings ...)
  2016-06-14 15:49                     ` [PATCH v11 3/3] mbuf: make default mempool ops configurable at build David Hunt
@ 2016-06-15  7:47                     ` David Hunt
  2016-06-15  7:47                       ` [PATCH v12 1/3] mempool: support external mempool operations David Hunt
                                         ` (4 more replies)
  3 siblings, 5 replies; 238+ messages in thread
From: David Hunt @ 2016-06-15  7:47 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, viktorin, jerin.jacob, shreyansh.jain

Here's the latest version of the External Mempool Manager patchset.
It's rebased on top of the latest head as of 14/6/2016, including
Olivier's 35-part patch series on the mempool re-org [1].

[1] http://dpdk.org/ml/archives/dev/2016-May/039229.html

v12 changes:

 * Fixed a comment (function param h -> ops)
 * Fixed a typo in mempool docs (callbacki)

v11 changes:

 * Fixed comments (added '.' where needed for consistency)
 * removed ABI breakage notice for mempool manager in deprecation.rst
 * Added description of the external mempool manager functionality to
   doc/guides/prog_guide/mempool_lib.rst (John Mc reviewed)
 * renamed rte_mempool_default.c to rte_mempool_ring.c

v10 changes:

 * changed the _put/_get op names to _enqueue/_dequeue to be consistent
   with the function names
 * some rte_errno cleanup
 * comment tweaks about when to set pool_data
 * removed an unneeded check for ops->alloc == NULL

v9 changes:

 * added a check for NULL alloc in rte_mempool_ops_register
 * rte_mempool_alloc_t now returns int instead of void*
 * fixed some comment typos
 * removed some unneeded typecasts
 * changed a return NULL to return -EEXIST in rte_mempool_ops_register
 * fixed rte_mempool_version.map file so builds ok as shared libs
 * moved flags check from rte_mempool_create_empty to rte_mempool_create

v8 changes:

 * merged first three patches in the series into one.
 * changed parameters to ops callbacks to all be rte_mempool pointers
   rather than pointers to opaque data or uint64.
 * comment fixes.
 * fixed parameter to _free function (was inconsistent).
 * changed MEMPOOL_F_RING_CREATED to MEMPOOL_F_POOL_CREATED

v7 changes:

 * Changed rte_mempool_handler_table to rte_mempool_ops_table
 * Changed hander_idx to ops_index in rte_mempool struct
 * Reworked comments in rte_mempool.h around ops functions
 * Changed rte_mempool_hander.c to rte_mempool_ops.c
 * Changed all functions containing _handler_ to _ops_
 * Now there is no mention of 'handler' left
 * Other small changes out of review of mailing list

v6 changes:

 * Moved the flags handling from rte_mempool_create_empty to
   rte_mempool_create, as it's only there for backward compatibility
 * Various comment additions and cleanup
 * Renamed rte_mempool_handler to rte_mempool_ops
 * Added a union for *pool and u64 pool_id in struct rte_mempool
 * split the original patch into a few parts for easier review.
 * rename functions with _ext_ to _ops_.
 * addressed review comments
 * renamed put and get functions to enqueue and dequeue
 * changed occurrences of rte_mempool_ops to const, as they
   contain function pointers (security)
 * split out the default external mempool handler into a separate
   patch for easier review

v5 changes:
 * rebasing, as it is dependent on another patch series [1]

v4 changes (Olivier Matz):
 * remove the rte_mempool_create_ext() function. To change the handler, the
   user has to do the following:
   - mp = rte_mempool_create_empty()
   - rte_mempool_set_handler(mp, "my_handler")
   - rte_mempool_populate_default(mp)
   This avoids adding another function with more than 10 arguments and
   duplicating the doxygen comments
 * change the api of rte_mempool_alloc_t: only the mempool pointer is required
   as all information is available in it
 * change the api of rte_mempool_free_t: remove return value
 * move inline wrapper functions from the .c to the .h (else they won't be
   inlined). This implies having a single header file (rte_mempool.h);
   otherwise it would have generated cross-dependency issues.
 * remove now unused MEMPOOL_F_INT_HANDLER (note: it was misused anyway due
   to the use of && instead of &)
 * fix build in debug mode (__MEMPOOL_STAT_ADD(mp, put_pool, n) remaining)
 * fix build with shared libraries (global handler has to be declared in
   the .map file)
 * rationalize #include order
 * remove unused function rte_mempool_get_handler_name()
 * rename some structures, fields, functions
 * remove the static in front of rte_tailq_elem rte_mempool_tailq (comment
   from Yuanhan)
 * test the ext mempool handler in the same file as the standard mempool
   tests, avoiding code duplication
 * rework the custom handler in mempool_test
 * rework a bit the patch selecting default mbuf pool handler
 * fix some doxygen comments

v3 changes:
 * simplified the file layout, renamed to rte_mempool_handler.[hc]
 * moved the default handlers into rte_mempool_default.c
 * moved the example handler out into app/test/test_ext_mempool.c
 * removed is_mc/is_mp change, slight perf degradation on sp cached operation
 * removed stack handler; may re-introduce at a later date
 * Changes out of code reviews

v2 changes:
 * There was a lot of duplicate code between rte_mempool_xmem_create and
   rte_mempool_create_ext. This has now been refactored and is now
   hopefully cleaner.
 * The RTE_NEXT_ABI define is now used to allow building of the library
   in a format that is compatible with binaries built against previous
   versions of DPDK.
 * Changes out of code reviews. Hopefully I've got most of them included.

The External Mempool Manager is an extension to the mempool API that allows
users to add and use an external mempool manager, which allows external memory
subsystems such as external hardware memory management systems and software
based memory allocators to be used with DPDK.

The existing API to the internal DPDK mempool manager will remain unchanged
and will be backward compatible. However, there will be an ABI breakage, as
the mempool struct is changing. These changes are all contained within
RTE_NEXT_ABI defines, and the current or next code can be selected with
the CONFIG_RTE_NEXT_ABI config setting.

There are two aspects to the external mempool manager.
  1. Adding the code for your new mempool operations (ops). This is
     achieved by adding a new mempool ops source file into the
     librte_mempool library, and using the MEMPOOL_REGISTER_OPS macro.
  2. Using the new API to call rte_mempool_create_empty and
     rte_mempool_set_ops_byname to create a new mempool
     using the name parameter to identify which ops to use.

New API calls added
 1. A new rte_mempool_create_empty() function
 2. rte_mempool_set_ops_byname() which sets the mempool's ops (functions)
 3. rte_mempool_populate_default() and rte_mempool_populate_anon() functions,
    which populate the mempool using the relevant ops

Several external mempool managers may be used in the same application. A new
mempool can then be created by using the new rte_mempool_create_empty function,
then calling rte_mempool_set_ops_byname to point the mempool to the relevant
mempool manager callback structure.

Legacy applications will continue to use the old rte_mempool_create API call,
which uses a ring-based mempool manager by default. These applications
will need to be modified if they are to use an external mempool manager.

The external mempool manager needs to provide the following functions.
 1. alloc     - allocates the mempool memory, and adds each object onto a ring
 2. enqueue   - puts an object back into the mempool once an application has
                finished with it
 3. dequeue   - gets an object from the mempool for use by the application
 4. get_count - gets the number of available objects in the mempool
 5. free      - frees the mempool memory

Every time an enqueue/dequeue/get_count is called from the application/PMD,
the callback for that mempool is called. These functions are in the fastpath,
and any unoptimised ops may limit performance.

The new APIs are as follows:

1. rte_mempool_create_empty

struct rte_mempool *
rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
    unsigned cache_size, unsigned private_data_size,
    int socket_id, unsigned flags);

2. rte_mempool_set_ops_byname()

int
rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name);

3. rte_mempool_populate_default()

int rte_mempool_populate_default(struct rte_mempool *mp);

4. rte_mempool_populate_anon()

int rte_mempool_populate_anon(struct rte_mempool *mp);

Please see rte_mempool.h for further information on the parameters.


The important thing to note is that the mempool ops struct is passed by name
to rte_mempool_set_ops_byname, which looks through the ops struct array to
get the ops_index, which is then stored in the rte_mempool structure. This
allows multiple processes to use the same mempool, as the function pointers
are accessed via the ops index.

The mempool ops structure contains callbacks to the implementation of
the ops function, and is set up for registration as follows:

static const struct rte_mempool_ops ops_sp_mc = {
    .name = "ring_sp_mc",
    .alloc = common_ring_alloc,
    .free = common_ring_free,
    .enqueue = common_ring_sp_enqueue,
    .dequeue = common_ring_mc_dequeue,
    .get_count = common_ring_get_count,
};

The following macro will then register the ops in the array of ops
structures:

MEMPOOL_REGISTER_OPS(ops_sp_mc);

For an example of API usage, please see app/test/test_mempool.c, which
implements a rudimentary "custom_handler" mempool manager using simple mallocs
for each mempool object. This file also contains the callbacks and
self-registration for the new handler.

David Hunt (2):
  mempool: support external mempool operations
  mbuf: make default mempool ops configurable at build

Olivier Matz (1):
  app/test: test external mempool manager

^ permalink raw reply	[flat|nested] 238+ messages in thread

* [PATCH v12 1/3] mempool: support external mempool operations
  2016-06-15  7:47                     ` [PATCH v12 0/3] mempool: add external mempool manager David Hunt
@ 2016-06-15  7:47                       ` David Hunt
  2016-06-15 10:14                         ` Jan Viktorin
  2016-06-15  7:47                       ` [PATCH v12 2/3] app/test: test external mempool manager David Hunt
                                         ` (3 subsequent siblings)
  4 siblings, 1 reply; 238+ messages in thread
From: David Hunt @ 2016-06-15  7:47 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, viktorin, jerin.jacob, shreyansh.jain, David Hunt

Until now, the objects stored in a mempool were internally stored in a
ring. This patch introduces the possibility to register external handlers
replacing the ring.

The default behavior remains unchanged, but calling the new function
rte_mempool_set_ops_byname() right after rte_mempool_create_empty() allows
the user to change the handler that will be used when populating
the mempool.

This patch also adds a set of default ops (function callbacks) based
on rte_ring.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
 app/test/test_mempool_perf.c               |   1 -
 doc/guides/prog_guide/mempool_lib.rst      |  31 +++-
 doc/guides/rel_notes/deprecation.rst       |   9 --
 lib/librte_mempool/Makefile                |   2 +
 lib/librte_mempool/rte_mempool.c           |  66 +++-----
 lib/librte_mempool/rte_mempool.h           | 251 ++++++++++++++++++++++++++---
 lib/librte_mempool/rte_mempool_ops.c       | 148 +++++++++++++++++
 lib/librte_mempool/rte_mempool_ring.c      | 161 ++++++++++++++++++
 lib/librte_mempool/rte_mempool_version.map |  13 +-
 9 files changed, 601 insertions(+), 81 deletions(-)
 create mode 100644 lib/librte_mempool/rte_mempool_ops.c
 create mode 100644 lib/librte_mempool/rte_mempool_ring.c

diff --git a/app/test/test_mempool_perf.c b/app/test/test_mempool_perf.c
index c5e3576..c5f8455 100644
--- a/app/test/test_mempool_perf.c
+++ b/app/test/test_mempool_perf.c
@@ -161,7 +161,6 @@ per_lcore_mempool_test(__attribute__((unused)) void *arg)
 							   n_get_bulk);
 				if (unlikely(ret < 0)) {
 					rte_mempool_dump(stdout, mp);
-					rte_ring_dump(stdout, mp->ring);
 					/* in this case, objects are lost... */
 					return -1;
 				}
diff --git a/doc/guides/prog_guide/mempool_lib.rst b/doc/guides/prog_guide/mempool_lib.rst
index c3afc2e..2e3116e 100644
--- a/doc/guides/prog_guide/mempool_lib.rst
+++ b/doc/guides/prog_guide/mempool_lib.rst
@@ -34,7 +34,7 @@ Mempool Library
 ===============
 
 A memory pool is an allocator of a fixed-sized object.
-In the DPDK, it is identified by name and uses a ring to store free objects.
+In the DPDK, it is identified by name and uses a ring or an external mempool manager to store free objects.
 It provides some other optional services such as a per-core object cache and
 an alignment helper to ensure that objects are padded to spread them equally on all DRAM or DDR3 channels.
 
@@ -127,6 +127,35 @@ The maximum size of the cache is static and is defined at compilation time (CONF
    A mempool in Memory with its Associated Ring
 
 
+External Mempool Manager
+------------------------
+
+This allows external memory subsystems, such as external hardware memory
+management systems and software based memory allocators, to be used with DPDK.
+
+There are two aspects to external mempool manager.
+
+* Adding the code for your new mempool operations (ops). This is achieved by
+  adding a new mempool ops code, and using the ``REGISTER_MEMPOOL_OPS`` macro.
+
+* Using the new API to call ``rte_mempool_create_empty()`` and
+  ``rte_mempool_set_ops_byname()`` to create a new mempool and specifying which
+  ops to use.
+
+Several external mempool managers may be used in the same application. A new
+mempool can be created by using the ``rte_mempool_create_empty()`` function,
+then using ``rte_mempool_set_ops_byname()`` to point the mempool to the
+relevant mempool manager callback (ops) structure.
+
+Legacy applications may continue to use the old ``rte_mempool_create()`` API
+call, which uses a ring based mempool manager by default. These applications
+will need to be modified to use a new external mempool manager.
+
+For applications that use ``rte_pktmbuf_create()``, there is a config setting
+(``RTE_MBUF_DEFAULT_MEMPOOL_OPS``) that allows the application to make use of
+an external mempool manager.
+
+
 Use Cases
 ---------
 
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 7d947ae..c415095 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -39,15 +39,6 @@ Deprecation Notices
   compact API. The ones that remain are backwards compatible and use the
   per-lcore default cache if available. This change targets release 16.07.
 
-* The rte_mempool struct will be changed in 16.07 to facilitate the new
-  external mempool manager functionality.
-  The ring element will be replaced with a more generic 'pool' opaque pointer
-  to allow new mempool handlers to use their own user-defined mempool
-  layout. Also newly added to rte_mempool is a handler index.
-  The existing API will be backward compatible, but there will be new API
-  functions added to facilitate the creation of mempools using an external
-  handler. The 16.07 release will contain these changes.
-
 * A librte_vhost public structures refactor is planned for DPDK 16.07
   that requires both ABI and API change.
   The proposed refactor would expose DPDK vhost dev to applications as
diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
index 43423e0..a4c089e 100644
--- a/lib/librte_mempool/Makefile
+++ b/lib/librte_mempool/Makefile
@@ -42,6 +42,8 @@ LIBABIVER := 2
 
 # all source are stored in SRCS-y
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool.c
+SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_ops.c
+SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_ring.c
 # install includes
 SYMLINK-$(CONFIG_RTE_LIBRTE_MEMPOOL)-include := rte_mempool.h
 
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 22a5645..ac40cb3 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -148,7 +148,7 @@ mempool_add_elem(struct rte_mempool *mp, void *obj, phys_addr_t physaddr)
 #endif
 
 	/* enqueue in ring */
-	rte_ring_sp_enqueue(mp->ring, obj);
+	rte_mempool_ops_enqueue_bulk(mp, &obj, 1);
 }
 
 /* call obj_cb() for each mempool element */
@@ -303,40 +303,6 @@ rte_mempool_xmem_usage(__rte_unused void *vaddr, uint32_t elt_num,
 	return (size_t)paddr_idx << pg_shift;
 }
 
-/* create the internal ring */
-static int
-rte_mempool_ring_create(struct rte_mempool *mp)
-{
-	int rg_flags = 0, ret;
-	char rg_name[RTE_RING_NAMESIZE];
-	struct rte_ring *r;
-
-	ret = snprintf(rg_name, sizeof(rg_name),
-		RTE_MEMPOOL_MZ_FORMAT, mp->name);
-	if (ret < 0 || ret >= (int)sizeof(rg_name))
-		return -ENAMETOOLONG;
-
-	/* ring flags */
-	if (mp->flags & MEMPOOL_F_SP_PUT)
-		rg_flags |= RING_F_SP_ENQ;
-	if (mp->flags & MEMPOOL_F_SC_GET)
-		rg_flags |= RING_F_SC_DEQ;
-
-	/* Allocate the ring that will be used to store objects.
-	 * Ring functions will return appropriate errors if we are
-	 * running as a secondary process etc., so no checks made
-	 * in this function for that condition.
-	 */
-	r = rte_ring_create(rg_name, rte_align32pow2(mp->size + 1),
-		mp->socket_id, rg_flags);
-	if (r == NULL)
-		return -rte_errno;
-
-	mp->ring = r;
-	mp->flags |= MEMPOOL_F_RING_CREATED;
-	return 0;
-}
-
 /* free a memchunk allocated with rte_memzone_reserve() */
 static void
 rte_mempool_memchunk_mz_free(__rte_unused struct rte_mempool_memhdr *memhdr,
@@ -354,7 +320,7 @@ rte_mempool_free_memchunks(struct rte_mempool *mp)
 	void *elt;
 
 	while (!STAILQ_EMPTY(&mp->elt_list)) {
-		rte_ring_sc_dequeue(mp->ring, &elt);
+		rte_mempool_ops_dequeue_bulk(mp, &elt, 1);
 		(void)elt;
 		STAILQ_REMOVE_HEAD(&mp->elt_list, next);
 		mp->populated_size--;
@@ -386,9 +352,9 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
 	int ret;
 
 	/* create the internal ring if not already done */
-	if ((mp->flags & MEMPOOL_F_RING_CREATED) == 0) {
-		ret = rte_mempool_ring_create(mp);
-		if (ret < 0)
+	if ((mp->flags & MEMPOOL_F_POOL_CREATED) == 0) {
+		ret = rte_mempool_ops_alloc(mp);
+		if (ret != 0)
 			return ret;
 	}
 
@@ -703,7 +669,7 @@ rte_mempool_free(struct rte_mempool *mp)
 	rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
 
 	rte_mempool_free_memchunks(mp);
-	rte_ring_free(mp->ring);
+	rte_mempool_ops_free(mp);
 	rte_memzone_free(mp->mz);
 }
 
@@ -815,6 +781,7 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
 		RTE_PTR_ADD(mp, MEMPOOL_HEADER_SIZE(mp, 0));
 
 	te->data = mp;
+
 	rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
 	TAILQ_INSERT_TAIL(mempool_list, te, next);
 	rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
@@ -844,6 +811,19 @@ rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
 	if (mp == NULL)
 		return NULL;
 
+	/*
+	 * Since we have 4 combinations of the SP/SC/MP/MC examine the flags to
+	 * set the correct index into the table of ops structs.
+	 */
+	if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
+		rte_mempool_set_ops_byname(mp, "ring_sp_sc");
+	else if (flags & MEMPOOL_F_SP_PUT)
+		rte_mempool_set_ops_byname(mp, "ring_sp_mc");
+	else if (flags & MEMPOOL_F_SC_GET)
+		rte_mempool_set_ops_byname(mp, "ring_mp_sc");
+	else
+		rte_mempool_set_ops_byname(mp, "ring_mp_mc");
+
 	/* call the mempool priv initializer */
 	if (mp_init)
 		mp_init(mp, mp_init_arg);
@@ -930,7 +910,7 @@ rte_mempool_count(const struct rte_mempool *mp)
 	unsigned count;
 	unsigned lcore_id;
 
-	count = rte_ring_count(mp->ring);
+	count = rte_mempool_ops_get_count(mp);
 
 	if (mp->cache_size == 0)
 		return count;
@@ -1119,7 +1099,7 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 
 	fprintf(f, "mempool <%s>@%p\n", mp->name, mp);
 	fprintf(f, "  flags=%x\n", mp->flags);
-	fprintf(f, "  ring=<%s>@%p\n", mp->ring->name, mp->ring);
+	fprintf(f, "  pool=%p\n", mp->pool_data);
 	fprintf(f, "  phys_addr=0x%" PRIx64 "\n", mp->mz->phys_addr);
 	fprintf(f, "  nb_mem_chunks=%u\n", mp->nb_mem_chunks);
 	fprintf(f, "  size=%"PRIu32"\n", mp->size);
@@ -1140,7 +1120,7 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 	}
 
 	cache_count = rte_mempool_dump_cache(f, mp);
-	common_count = rte_ring_count(mp->ring);
+	common_count = rte_mempool_ops_get_count(mp);
 	if ((cache_count + common_count) > mp->size)
 		common_count = mp->size - cache_count;
 	fprintf(f, "  common_pool_count=%u\n", common_count);
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 60339bd..92deb42 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -67,6 +67,7 @@
 #include <inttypes.h>
 #include <sys/queue.h>
 
+#include <rte_spinlock.h>
 #include <rte_log.h>
 #include <rte_debug.h>
 #include <rte_lcore.h>
@@ -203,10 +204,13 @@ struct rte_mempool_memhdr {
  */
 struct rte_mempool {
 	char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool. */
-	struct rte_ring *ring;           /**< Ring to store objects. */
-	const struct rte_memzone *mz;    /**< Memzone where pool is allocated */
+	union {
+		void *pool_data;         /**< Ring or pool to store objects. */
+		uint64_t pool_id;        /**< External mempool identifier. */
+	};
+	const struct rte_memzone *mz;    /**< Memzone where pool is alloc'd. */
 	int flags;                       /**< Flags of the mempool. */
-	int socket_id;                   /**< Socket id passed at mempool creation. */
+	int socket_id;                   /**< Socket id passed at create. */
 	uint32_t size;                   /**< Max size of the mempool. */
 	uint32_t cache_size;             /**< Size of per-lcore local cache. */
 	uint32_t cache_flushthresh;
@@ -217,6 +221,14 @@ struct rte_mempool {
 	uint32_t trailer_size;           /**< Size of trailer (after elt). */
 
 	unsigned private_data_size;      /**< Size of private data. */
+	/**
+	 * Index into rte_mempool_ops_table array of mempool ops
+	 * structs, which contain callback function pointers.
+	 * We're using an index here rather than pointers to the callbacks
+	 * to facilitate any secondary processes that may want to use
+	 * this mempool.
+	 */
+	int32_t ops_index;
 
 	struct rte_mempool_cache *local_cache; /**< Per-lcore local cache */
 
@@ -235,7 +247,7 @@ struct rte_mempool {
 #define MEMPOOL_F_NO_CACHE_ALIGN 0x0002 /**< Do not align objs on cache lines.*/
 #define MEMPOOL_F_SP_PUT         0x0004 /**< Default put is "single-producer".*/
 #define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
-#define MEMPOOL_F_RING_CREATED   0x0010 /**< Internal: ring is created */
+#define MEMPOOL_F_POOL_CREATED   0x0010 /**< Internal: pool is created. */
 #define MEMPOOL_F_NO_PHYS_CONTIG 0x0020 /**< Don't need physically contiguous objs. */
 
 /**
@@ -325,6 +337,212 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp,
 #define __mempool_check_cookies(mp, obj_table_const, n, free) do {} while(0)
 #endif /* RTE_LIBRTE_MEMPOOL_DEBUG */
 
+#define RTE_MEMPOOL_OPS_NAMESIZE 32 /**< Max length of ops struct name. */
+
+/**
+ * Prototype for implementation specific data provisioning function.
+ *
+ * The function should provide the implementation-specific memory for
+ * use by the other mempool ops functions in a given mempool ops struct.
+ * E.g. the default ops provides an instance of the rte_ring for this
+ * purpose. Other handlers will most likely point to a different type of
+ * data structure, transparent to the application programmer.
+ * This function should set mp->pool_data.
+ */
+typedef int (*rte_mempool_alloc_t)(struct rte_mempool *mp);
+
+/**
+ * Free the opaque private data pointed to by mp->pool_data pointer.
+ */
+typedef void (*rte_mempool_free_t)(struct rte_mempool *mp);
+
+/**
+ * Enqueue an object into the external pool.
+ */
+typedef int (*rte_mempool_enqueue_t)(struct rte_mempool *mp,
+		void * const *obj_table, unsigned int n);
+
+/**
+ * Dequeue an object from the external pool.
+ */
+typedef int (*rte_mempool_dequeue_t)(struct rte_mempool *mp,
+		void **obj_table, unsigned int n);
+
+/**
+ * Return the number of available objects in the external pool.
+ */
+typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
+
+/** Structure defining mempool operations structure */
+struct rte_mempool_ops {
+	char name[RTE_MEMPOOL_OPS_NAMESIZE]; /**< Name of mempool ops struct. */
+	rte_mempool_alloc_t alloc;       /**< Allocate private data. */
+	rte_mempool_free_t free;         /**< Free the external pool. */
+	rte_mempool_enqueue_t enqueue;   /**< Enqueue an object. */
+	rte_mempool_dequeue_t dequeue;   /**< Dequeue an object. */
+	rte_mempool_get_count get_count; /**< Get qty of available objs. */
+} __rte_cache_aligned;
+
+#define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */
+
+/**
+ * Structure storing the table of registered ops structs, each of which contain
+ * the function pointers for the mempool ops functions.
+ * Each process has its own storage for this ops struct array so that
+ * the mempools can be shared across primary and secondary processes.
+ * The indices used to access the array are valid across processes, whereas
+ * any function pointers stored directly in the mempool struct would not be.
+ * This results in us simply having "ops_index" in the mempool struct.
+ */
+struct rte_mempool_ops_table {
+	rte_spinlock_t sl;     /**< Spinlock for add/delete. */
+	uint32_t num_ops;      /**< Number of used ops structs in the table. */
+	/**
+	 * Storage for all possible ops structs.
+	 */
+	struct rte_mempool_ops ops[RTE_MEMPOOL_MAX_OPS_IDX];
+} __rte_cache_aligned;
+
+/** Array of registered ops structs. */
+extern struct rte_mempool_ops_table rte_mempool_ops_table;
+
+/**
+ * @internal Get the mempool ops struct from its index.
+ *
+ * @param ops_index
+ *   The index of the ops struct in the ops struct table. It must be a valid
+ *   index: (0 <= idx < num_ops).
+ * @return
+ *   The pointer to the ops struct in the table.
+ */
+static inline struct rte_mempool_ops *
+rte_mempool_ops_get(int ops_index)
+{
+	RTE_VERIFY(ops_index < RTE_MEMPOOL_MAX_OPS_IDX);
+
+	return &rte_mempool_ops_table.ops[ops_index];
+}
+
+/**
+ * @internal Wrapper for mempool_ops alloc callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @return
+ *   - 0: Success; successfully allocated mempool pool_data.
+ *   - <0: Error; code of alloc function.
+ */
+int
+rte_mempool_ops_alloc(struct rte_mempool *mp);
+
+/**
+ * @internal Wrapper for mempool_ops get callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param obj_table
+ *   Pointer to a table of void * pointers (objects).
+ * @param n
+ *   Number of objects to get.
+ * @return
+ *   - 0: Success; got n objects.
+ *   - <0: Error; code of get function.
+ */
+static inline int
+rte_mempool_ops_dequeue_bulk(struct rte_mempool *mp,
+		void **obj_table, unsigned n)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_ops_get(mp->ops_index);
+	return ops->dequeue(mp, obj_table, n);
+}
+
+/**
+ * @internal wrapper for mempool_ops put callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param obj_table
+ *   Pointer to a table of void * pointers (objects).
+ * @param n
+ *   Number of objects to put.
+ * @return
+ *   - 0: Success; n objects supplied.
+ *   - <0: Error; code of put function.
+ */
+static inline int
+rte_mempool_ops_enqueue_bulk(struct rte_mempool *mp, void * const *obj_table,
+		unsigned n)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_ops_get(mp->ops_index);
+	return ops->enqueue(mp, obj_table, n);
+}
+
+/**
+ * @internal wrapper for mempool_ops get_count callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @return
+ *   The number of available objects in the external pool.
+ */
+unsigned
+rte_mempool_ops_get_count(const struct rte_mempool *mp);
+
+/**
+ * @internal wrapper for mempool_ops free callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ */
+void
+rte_mempool_ops_free(struct rte_mempool *mp);
+
+/**
+ * Set the ops of a mempool.
+ *
+ * This can only be done on a mempool that is not populated, i.e. just after
+ * a call to rte_mempool_create_empty().
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param name
+ *   Name of the ops structure to use for this mempool.
+ * @return
+ *   - 0: Success; the mempool is now using the requested ops functions.
+ *   - -EINVAL - Invalid ops struct name provided.
+ *   - -EEXIST - mempool already has an ops struct assigned.
+ */
+int
+rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name);
+
+/**
+ * Register mempool operations.
+ *
+ * @param ops
+ *   Pointer to an ops structure to register.
+ * @return
+ *   - >=0: Success; return the index of the ops struct in the table.
+ *   - -EINVAL - some missing callbacks while registering ops struct.
+ *   - -ENOSPC - the maximum number of ops structs has been reached.
+ */
+int rte_mempool_ops_register(const struct rte_mempool_ops *ops);
+
+/**
+ * Macro to statically register the ops of an external mempool manager.
+ * Note that rte_mempool_ops_register fails silently here when more
+ * than RTE_MEMPOOL_MAX_OPS_IDX ops structs are registered.
+ */
+#define MEMPOOL_REGISTER_OPS(ops)					\
+	void mp_hdlr_init_##ops(void);					\
+	void __attribute__((constructor, used)) mp_hdlr_init_##ops(void)\
+	{								\
+		rte_mempool_ops_register(&ops);			\
+	}
+
 /**
  * An object callback function for mempool.
  *
@@ -774,7 +992,7 @@ __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
 	cache->len += n;
 
 	if (cache->len >= flushthresh) {
-		rte_ring_mp_enqueue_bulk(mp->ring, &cache->objs[cache_size],
+		rte_mempool_ops_enqueue_bulk(mp, &cache->objs[cache_size],
 				cache->len - cache_size);
 		cache->len = cache_size;
 	}
@@ -785,19 +1003,10 @@ ring_enqueue:
 
 	/* push remaining objects in ring */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-	if (is_mp) {
-		if (rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n) < 0)
-			rte_panic("cannot put objects in mempool\n");
-	}
-	else {
-		if (rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n) < 0)
-			rte_panic("cannot put objects in mempool\n");
-	}
+	if (rte_mempool_ops_enqueue_bulk(mp, obj_table, n) < 0)
+		rte_panic("cannot put objects in mempool\n");
 #else
-	if (is_mp)
-		rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n);
-	else
-		rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n);
+	rte_mempool_ops_enqueue_bulk(mp, obj_table, n);
 #endif
 }
 
@@ -945,7 +1154,8 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
 		uint32_t req = n + (cache_size - cache->len);
 
 		/* How many do we require i.e. number to fill the cache + the request */
-		ret = rte_ring_mc_dequeue_bulk(mp->ring, &cache->objs[cache->len], req);
+		ret = rte_mempool_ops_dequeue_bulk(mp,
+			&cache->objs[cache->len], req);
 		if (unlikely(ret < 0)) {
 			/*
 			 * In the offchance that we are buffer constrained,
@@ -972,10 +1182,7 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
 ring_dequeue:
 
 	/* get remaining objects from ring */
-	if (is_mc)
-		ret = rte_ring_mc_dequeue_bulk(mp->ring, obj_table, n);
-	else
-		ret = rte_ring_sc_dequeue_bulk(mp->ring, obj_table, n);
+	ret = rte_mempool_ops_dequeue_bulk(mp, obj_table, n);
 
 	if (ret < 0)
 		__MEMPOOL_STAT_ADD(mp, get_fail, n);
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
new file mode 100644
index 0000000..9328b77
--- /dev/null
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -0,0 +1,148 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2016 6WIND S.A.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <string.h>
+
+#include <rte_mempool.h>
+#include <rte_errno.h>
+
+/* indirect jump table to support external memory pools. */
+struct rte_mempool_ops_table rte_mempool_ops_table = {
+	.sl =  RTE_SPINLOCK_INITIALIZER,
+	.num_ops = 0
+};
+
+/* add a new ops struct in rte_mempool_ops_table, return its index. */
+int
+rte_mempool_ops_register(const struct rte_mempool_ops *h)
+{
+	struct rte_mempool_ops *ops;
+	int16_t ops_index;
+
+	rte_spinlock_lock(&rte_mempool_ops_table.sl);
+
+	if (rte_mempool_ops_table.num_ops >=
+			RTE_MEMPOOL_MAX_OPS_IDX) {
+		rte_spinlock_unlock(&rte_mempool_ops_table.sl);
+		RTE_LOG(ERR, MEMPOOL,
+			"Maximum number of mempool ops structs exceeded\n");
+		return -ENOSPC;
+	}
+
+	if (h->alloc == NULL || h->enqueue == NULL ||
+			h->dequeue == NULL || h->get_count == NULL) {
+		rte_spinlock_unlock(&rte_mempool_ops_table.sl);
+		RTE_LOG(ERR, MEMPOOL,
+			"Missing callback while registering mempool ops\n");
+		return -EINVAL;
+	}
+
+	if (strlen(h->name) >= sizeof(ops->name) - 1) {
+		rte_spinlock_unlock(&rte_mempool_ops_table.sl);
+		RTE_LOG(DEBUG, EAL, "%s(): mempool_ops <%s>: name too long\n",
+				__func__, h->name);
+		rte_errno = EEXIST;
+		return -EEXIST;
+	}
+
+	ops_index = rte_mempool_ops_table.num_ops++;
+	ops = &rte_mempool_ops_table.ops[ops_index];
+	snprintf(ops->name, sizeof(ops->name), "%s", h->name);
+	ops->alloc = h->alloc;
+	ops->enqueue = h->enqueue;
+	ops->dequeue = h->dequeue;
+	ops->get_count = h->get_count;
+
+	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
+
+	return ops_index;
+}
+
+/* wrapper to allocate an external mempool's private (pool) data. */
+int
+rte_mempool_ops_alloc(struct rte_mempool *mp)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_ops_get(mp->ops_index);
+	return ops->alloc(mp);
+}
+
+/* wrapper to free an external pool ops. */
+void
+rte_mempool_ops_free(struct rte_mempool *mp)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_ops_get(mp->ops_index);
+	if (ops->free == NULL)
+		return;
+	return ops->free(mp);
+}
+
+/* wrapper to get available objects in an external mempool. */
+unsigned int
+rte_mempool_ops_get_count(const struct rte_mempool *mp)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_ops_get(mp->ops_index);
+	return ops->get_count(mp);
+}
+
+/* sets mempool ops previously registered by rte_mempool_ops_register. */
+int
+rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name)
+{
+	struct rte_mempool_ops *ops = NULL;
+	unsigned i;
+
+	/* too late, the mempool is already populated. */
+	if (mp->flags & MEMPOOL_F_POOL_CREATED)
+		return -EEXIST;
+
+	for (i = 0; i < rte_mempool_ops_table.num_ops; i++) {
+		if (!strcmp(name,
+				rte_mempool_ops_table.ops[i].name)) {
+			ops = &rte_mempool_ops_table.ops[i];
+			break;
+		}
+	}
+
+	if (ops == NULL)
+		return -EINVAL;
+
+	mp->ops_index = i;
+	return 0;
+}
diff --git a/lib/librte_mempool/rte_mempool_ring.c b/lib/librte_mempool/rte_mempool_ring.c
new file mode 100644
index 0000000..626786e
--- /dev/null
+++ b/lib/librte_mempool/rte_mempool_ring.c
@@ -0,0 +1,161 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <string.h>
+
+#include <rte_errno.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+
+static int
+common_ring_mp_enqueue(struct rte_mempool *mp, void * const *obj_table,
+		unsigned n)
+{
+	return rte_ring_mp_enqueue_bulk(mp->pool_data, obj_table, n);
+}
+
+static int
+common_ring_sp_enqueue(struct rte_mempool *mp, void * const *obj_table,
+		unsigned n)
+{
+	return rte_ring_sp_enqueue_bulk(mp->pool_data, obj_table, n);
+}
+
+static int
+common_ring_mc_dequeue(struct rte_mempool *mp, void **obj_table, unsigned n)
+{
+	return rte_ring_mc_dequeue_bulk(mp->pool_data, obj_table, n);
+}
+
+static int
+common_ring_sc_dequeue(struct rte_mempool *mp, void **obj_table, unsigned n)
+{
+	return rte_ring_sc_dequeue_bulk(mp->pool_data, obj_table, n);
+}
+
+static unsigned
+common_ring_get_count(const struct rte_mempool *mp)
+{
+	return rte_ring_count(mp->pool_data);
+}
+
+
+static int
+common_ring_alloc(struct rte_mempool *mp)
+{
+	int rg_flags = 0, ret;
+	char rg_name[RTE_RING_NAMESIZE];
+	struct rte_ring *r;
+
+	ret = snprintf(rg_name, sizeof(rg_name),
+		RTE_MEMPOOL_MZ_FORMAT, mp->name);
+	if (ret < 0 || ret >= (int)sizeof(rg_name)) {
+		rte_errno = ENAMETOOLONG;
+		return -rte_errno;
+	}
+
+	/* ring flags */
+	if (mp->flags & MEMPOOL_F_SP_PUT)
+		rg_flags |= RING_F_SP_ENQ;
+	if (mp->flags & MEMPOOL_F_SC_GET)
+		rg_flags |= RING_F_SC_DEQ;
+
+	/*
+	 * Allocate the ring that will be used to store objects.
+	 * Ring functions will return appropriate errors if we are
+	 * running as a secondary process etc., so no checks made
+	 * in this function for that condition.
+	 */
+	r = rte_ring_create(rg_name, rte_align32pow2(mp->size + 1),
+		mp->socket_id, rg_flags);
+	if (r == NULL)
+		return -rte_errno;
+
+	mp->pool_data = r;
+
+	return 0;
+}
+
+static void
+common_ring_free(struct rte_mempool *mp)
+{
+	rte_ring_free(mp->pool_data);
+}
+
+/*
+ * The following 4 declarations of mempool ops structs address
+ * the need for the backward compatible mempool managers for
+ * single/multi producers and single/multi consumers as dictated by the
+ * flags provided to the rte_mempool_create function
+ */
+static const struct rte_mempool_ops ops_mp_mc = {
+	.name = "ring_mp_mc",
+	.alloc = common_ring_alloc,
+	.free = common_ring_free,
+	.enqueue = common_ring_mp_enqueue,
+	.dequeue = common_ring_mc_dequeue,
+	.get_count = common_ring_get_count,
+};
+
+static const struct rte_mempool_ops ops_sp_sc = {
+	.name = "ring_sp_sc",
+	.alloc = common_ring_alloc,
+	.free = common_ring_free,
+	.enqueue = common_ring_sp_enqueue,
+	.dequeue = common_ring_sc_dequeue,
+	.get_count = common_ring_get_count,
+};
+
+static const struct rte_mempool_ops ops_mp_sc = {
+	.name = "ring_mp_sc",
+	.alloc = common_ring_alloc,
+	.free = common_ring_free,
+	.enqueue = common_ring_mp_enqueue,
+	.dequeue = common_ring_sc_dequeue,
+	.get_count = common_ring_get_count,
+};
+
+static const struct rte_mempool_ops ops_sp_mc = {
+	.name = "ring_sp_mc",
+	.alloc = common_ring_alloc,
+	.free = common_ring_free,
+	.enqueue = common_ring_sp_enqueue,
+	.dequeue = common_ring_mc_dequeue,
+	.get_count = common_ring_get_count,
+};
+
+MEMPOOL_REGISTER_OPS(ops_mp_mc);
+MEMPOOL_REGISTER_OPS(ops_sp_sc);
+MEMPOOL_REGISTER_OPS(ops_mp_sc);
+MEMPOOL_REGISTER_OPS(ops_sp_mc);
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index f63461b..6209ec2 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -20,15 +20,18 @@ DPDK_16.7 {
 	global:
 
 	rte_mempool_check_cookies;
-	rte_mempool_obj_iter;
-	rte_mempool_mem_iter;
 	rte_mempool_create_empty;
+	rte_mempool_free;
+	rte_mempool_mem_iter;
+	rte_mempool_obj_iter;
+	rte_mempool_ops_register;
+	rte_mempool_ops_table;
+	rte_mempool_populate_anon;
+	rte_mempool_populate_default;
 	rte_mempool_populate_phys;
 	rte_mempool_populate_phys_tab;
 	rte_mempool_populate_virt;
-	rte_mempool_populate_default;
-	rte_mempool_populate_anon;
-	rte_mempool_free;
+	rte_mempool_set_ops_byname;
 
 	local: *;
 } DPDK_2.0;
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 238+ messages in thread

* [PATCH v12 2/3] app/test: test external mempool manager
  2016-06-15  7:47                     ` [PATCH v12 0/3] mempool: add external mempool manager David Hunt
  2016-06-15  7:47                       ` [PATCH v12 1/3] mempool: support external mempool operations David Hunt
@ 2016-06-15  7:47                       ` David Hunt
  2016-06-15  7:47                       ` [PATCH v12 3/3] mbuf: make default mempool ops configurable at build David Hunt
                                         ` (2 subsequent siblings)
  4 siblings, 0 replies; 238+ messages in thread
From: David Hunt @ 2016-06-15  7:47 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, viktorin, jerin.jacob, shreyansh.jain, David Hunt

Use a minimal custom mempool ops implementation and check that it also
passes the basic mempool autotests.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
 app/test/test_mempool.c | 122 +++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 120 insertions(+), 2 deletions(-)

diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c
index b586249..bcf379b 100644
--- a/app/test/test_mempool.c
+++ b/app/test/test_mempool.c
@@ -83,6 +83,99 @@
 static rte_atomic32_t synchro;
 
 /*
+ * Simple example of custom mempool structure. Holds pointers to all the
+ * elements which are simply malloc'd in this example.
+ */
+struct custom_mempool {
+	rte_spinlock_t lock;
+	unsigned count;
+	unsigned size;
+	void *elts[];
+};
+
+/*
+ * Allocate the custom mempool structure, including room for the array of
+ * element pointers; the elements themselves are provided later at populate.
+ */
+static int
+custom_mempool_alloc(struct rte_mempool *mp)
+{
+	struct custom_mempool *cm;
+
+	cm = rte_zmalloc("custom_mempool",
+		sizeof(struct custom_mempool) + mp->size * sizeof(void *), 0);
+	if (cm == NULL)
+		return -ENOMEM;
+
+	rte_spinlock_init(&cm->lock);
+	cm->count = 0;
+	cm->size = mp->size;
+	mp->pool_data = cm;
+	return 0;
+}
+
+static void
+custom_mempool_free(struct rte_mempool *mp)
+{
+	rte_free((void *)(mp->pool_data));
+}
+
+static int
+custom_mempool_enqueue(struct rte_mempool *mp, void * const *obj_table,
+		unsigned n)
+{
+	struct custom_mempool *cm = (struct custom_mempool *)(mp->pool_data);
+	int ret = 0;
+
+	rte_spinlock_lock(&cm->lock);
+	if (cm->count + n > cm->size) {
+		ret = -ENOBUFS;
+	} else {
+		memcpy(&cm->elts[cm->count], obj_table, sizeof(void *) * n);
+		cm->count += n;
+	}
+	rte_spinlock_unlock(&cm->lock);
+	return ret;
+}
+
+
+static int
+custom_mempool_dequeue(struct rte_mempool *mp, void **obj_table, unsigned n)
+{
+	struct custom_mempool *cm = (struct custom_mempool *)(mp->pool_data);
+	int ret = 0;
+
+	rte_spinlock_lock(&cm->lock);
+	if (n > cm->count) {
+		ret = -ENOENT;
+	} else {
+		cm->count -= n;
+		memcpy(obj_table, &cm->elts[cm->count], sizeof(void *) * n);
+	}
+	rte_spinlock_unlock(&cm->lock);
+	return ret;
+}
+
+static unsigned
+custom_mempool_get_count(const struct rte_mempool *mp)
+{
+	struct custom_mempool *cm = (struct custom_mempool *)(mp->pool_data);
+
+	return cm->count;
+}
+
+static struct rte_mempool_ops mempool_ops_custom = {
+	.name = "custom_handler",
+	.alloc = custom_mempool_alloc,
+	.free = custom_mempool_free,
+	.enqueue = custom_mempool_enqueue,
+	.dequeue = custom_mempool_dequeue,
+	.get_count = custom_mempool_get_count,
+};
+
+MEMPOOL_REGISTER_OPS(mempool_ops_custom);
+
+/*
  * save the object number in the first 4 bytes of object data. All
  * other bytes are set to 0.
  */
@@ -292,12 +385,14 @@ static int test_mempool_single_consumer(void)
 * test function for mempool test based on single consumer and single producer,
  * can run on one lcore only
  */
-static int test_mempool_launch_single_consumer(__attribute__((unused)) void *arg)
+static int
+test_mempool_launch_single_consumer(__attribute__((unused)) void *arg)
 {
 	return test_mempool_single_consumer();
 }
 
-static void my_mp_init(struct rte_mempool * mp, __attribute__((unused)) void * arg)
+static void
+my_mp_init(struct rte_mempool *mp, __attribute__((unused)) void *arg)
 {
 	printf("mempool name is %s\n", mp->name);
 	/* nothing to be implemented here*/
@@ -477,6 +572,7 @@ test_mempool(void)
 {
 	struct rte_mempool *mp_cache = NULL;
 	struct rte_mempool *mp_nocache = NULL;
+	struct rte_mempool *mp_ext = NULL;
 
 	rte_atomic32_init(&synchro);
 
@@ -505,6 +601,27 @@ test_mempool(void)
 		goto err;
 	}
 
+	/* create a mempool with an external handler */
+	mp_ext = rte_mempool_create_empty("test_ext",
+		MEMPOOL_SIZE,
+		MEMPOOL_ELT_SIZE,
+		RTE_MEMPOOL_CACHE_MAX_SIZE, 0,
+		SOCKET_ID_ANY, 0);
+
+	if (mp_ext == NULL) {
+		printf("cannot allocate mp_ext mempool\n");
+		goto err;
+	}
+	if (rte_mempool_set_ops_byname(mp_ext, "custom_handler") < 0) {
+		printf("cannot set custom handler\n");
+		goto err;
+	}
+	if (rte_mempool_populate_default(mp_ext) < 0) {
+		printf("cannot populate mp_ext mempool\n");
+		goto err;
+	}
+	rte_mempool_obj_iter(mp_ext, my_obj_init, NULL);
+
 	/* retrieve the mempool from its name */
 	if (rte_mempool_lookup("test_nocache") != mp_nocache) {
 		printf("Cannot lookup mempool from its name\n");
@@ -545,6 +662,7 @@ test_mempool(void)
 err:
 	rte_mempool_free(mp_nocache);
 	rte_mempool_free(mp_cache);
+	rte_mempool_free(mp_ext);
 	return -1;
 }
 
-- 
2.5.5


* [PATCH v12 3/3] mbuf: make default mempool ops configurable at build
  2016-06-15  7:47                     ` [PATCH v12 0/3] mempool: add external mempool manager David Hunt
  2016-06-15  7:47                       ` [PATCH v12 1/3] mempool: support external mempool operations David Hunt
  2016-06-15  7:47                       ` [PATCH v12 2/3] app/test: test external mempool manager David Hunt
@ 2016-06-15  7:47                       ` David Hunt
  2016-06-15 10:13                       ` [PATCH v12 0/3] mempool: add external mempool manager Jan Viktorin
  2016-06-16 12:30                       ` [PATCH v13 " David Hunt
  4 siblings, 0 replies; 238+ messages in thread
From: David Hunt @ 2016-06-15  7:47 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, viktorin, jerin.jacob, shreyansh.jain, David Hunt

By default, the mempool ops used for mbuf allocations is a multi
producer and multi consumer ring. We could imagine a target (maybe some
network processors?) that provides a hardware-assisted pool
mechanism. In this case, the default configuration for this architecture
would contain a different value for RTE_MBUF_DEFAULT_MEMPOOL_OPS.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
 config/common_base         |  1 +
 lib/librte_mbuf/rte_mbuf.c | 26 ++++++++++++++++++++++----
 2 files changed, 23 insertions(+), 4 deletions(-)

diff --git a/config/common_base b/config/common_base
index b9ba405..13ad4dd 100644
--- a/config/common_base
+++ b/config/common_base
@@ -394,6 +394,7 @@ CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG=n
 #
 CONFIG_RTE_LIBRTE_MBUF=y
 CONFIG_RTE_LIBRTE_MBUF_DEBUG=n
+CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS="ring_mp_mc"
 CONFIG_RTE_MBUF_REFCNT_ATOMIC=y
 CONFIG_RTE_PKTMBUF_HEADROOM=128
 
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index eec1456..491230c 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -153,6 +153,7 @@ rte_pktmbuf_pool_create(const char *name, unsigned n,
 	unsigned cache_size, uint16_t priv_size, uint16_t data_room_size,
 	int socket_id)
 {
+	struct rte_mempool *mp;
 	struct rte_pktmbuf_pool_private mbp_priv;
 	unsigned elt_size;
 
@@ -167,10 +168,27 @@ rte_pktmbuf_pool_create(const char *name, unsigned n,
 	mbp_priv.mbuf_data_room_size = data_room_size;
 	mbp_priv.mbuf_priv_size = priv_size;
 
-	return rte_mempool_create(name, n, elt_size,
-		cache_size, sizeof(struct rte_pktmbuf_pool_private),
-		rte_pktmbuf_pool_init, &mbp_priv, rte_pktmbuf_init, NULL,
-		socket_id, 0);
+	mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
+		 sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
+	if (mp == NULL)
+		return NULL;
+
+	rte_errno = rte_mempool_set_ops_byname(mp,
+			RTE_MBUF_DEFAULT_MEMPOOL_OPS);
+	if (rte_errno != 0) {
+		RTE_LOG(ERR, MBUF, "error setting mempool handler\n");
+		return NULL;
+	}
+	rte_pktmbuf_pool_init(mp, &mbp_priv);
+
+	if (rte_mempool_populate_default(mp) < 0) {
+		rte_mempool_free(mp);
+		return NULL;
+	}
+
+	rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);
+
+	return mp;
 }
 
 /* do some sanity checks on a mbuf: panic if it fails */
-- 
2.5.5


* Re: [PATCH v12 0/3] mempool: add external mempool manager
  2016-06-15  7:47                     ` [PATCH v12 0/3] mempool: add external mempool manager David Hunt
                                         ` (2 preceding siblings ...)
  2016-06-15  7:47                       ` [PATCH v12 3/3] mbuf: make default mempool ops configurable at build David Hunt
@ 2016-06-15 10:13                       ` Jan Viktorin
  2016-06-15 11:47                         ` Hunt, David
  2016-06-16 12:30                       ` [PATCH v13 " David Hunt
  4 siblings, 1 reply; 238+ messages in thread
From: Jan Viktorin @ 2016-06-15 10:13 UTC (permalink / raw)
  To: David Hunt; +Cc: dev, olivier.matz, jerin.jacob, shreyansh.jain

Hi,

I've got one last question. Initially, I was interested in creating
my own external memory provider based on a Linux Kernel driver.
So, I've got an opened file descriptor that points to a device which
can mmap a memory regions for me.

...
int fd = open("/dev/uio0" ...);
...
rte_mempool *pool = rte_mempool_create_empty(...);
rte_mempool_set_ops_byname(pool, "uio_allocator_ops");

I am not sure how to pass the file descriptor pointer. I thought it would
be possible by the rte_mempool_alloc but it's not... Is it possible
to solve this case?

The allocator is device-specific.

Regards
Jan

On Wed, 15 Jun 2016 08:47:01 +0100
David Hunt <david.hunt@intel.com> wrote:

> Here's the latest version of the External Mempool Manager patchset.
> It's re-based on top of the latest head as of 14/6/2016, including
> Olivier's 35-part patch series on mempool re-org [1]
> 
> [1] http://dpdk.org/ml/archives/dev/2016-May/039229.html
> 
> v12 changes:
> 
>  * Fixed a comment (function param h -> ops)
>  * Fixed a typo in mempool docs (callbacki)
> 
> v11 changes:
> 
>  * Fixed comments (added '.' where needed for consistency)
>  * removed ABI breakage notice for mempool manager in deprecation.rst
>  * Added description of the external mempool manager functionality to
>    doc/guides/prog_guide/mempool_lib.rst (John Mc reviewed)
>  * renamed rte_mempool_default.c to rte_mempool_ring.c
> 
> v10 changes:
> 
>  * changed the _put/_get op names to _enqueue/_dequeue to be consistent
>    with the function names
>  * some rte_errno cleanup
>  * comment tweaks about when to set pool_data
>  * removed an un-needed check for ops->alloc == NULL
> 
> v9 changes:
> 
>  * added a check for NULL alloc in rte_mempool_ops_register
>  * rte_mempool_alloc_t now returns int instead of void*
>  * fixed some comment typos
>  * removed some unneeded typecasts
>  * changed a return NULL to return -EEXIST in rte_mempool_ops_register
>  * fixed rte_mempool_version.map file so builds ok as shared libs
>  * moved flags check from rte_mempool_create_empty to rte_mempool_create
> 
> v8 changes:
> 
>  * merged first three patches in the series into one.
>  * changed parameters to ops callback to all be an rte_mempool pointer
>    rather than a pointer to opaque data or a uint64.
>  * comment fixes.
>  * fixed parameter to _free function (was inconsistent).
>  * changed MEMPOOL_F_RING_CREATED to MEMPOOL_F_POOL_CREATED
> 
> v7 changes:
> 
>  * Changed rte_mempool_handler_table to rte_mempool_ops_table
>  * Changed hander_idx to ops_index in rte_mempool struct
>  * Reworked comments in rte_mempool.h around ops functions
>  * Changed rte_mempool_hander.c to rte_mempool_ops.c
>  * Changed all functions containing _handler_ to _ops_
>  * Now there is no mention of 'handler' left
>  * Other small changes out of review of mailing list
> 
> v6 changes:
> 
>  * Moved the flags handling from rte_mempool_create_empty to
>    rte_mempool_create, as it's only there for backward compatibility
>  * Various comment additions and cleanup
>  * Renamed rte_mempool_handler to rte_mempool_ops
>  * Added a union for *pool and u64 pool_id in struct rte_mempool
>  * split the original patch into a few parts for easier review.
>  * rename functions with _ext_ to _ops_.
>  * addressed review comments
>  * renamed put and get functions to enqueue and dequeue
>  * changed occurrences of rte_mempool_ops to const, as they
>    contain function pointers (security)
>  * split out the default external mempool handler into a separate
>    patch for easier review
> 
> v5 changes:
>  * rebasing, as it is dependent on another patch series [1]
> 
> v4 changes (Olivier Matz):
>  * remove the rte_mempool_create_ext() function. To change the handler, the
>    user has to do the following:
>    - mp = rte_mempool_create_empty()
>    - rte_mempool_set_handler(mp, "my_handler")
>    - rte_mempool_populate_default(mp)
>    This avoids adding another function with more than 10 arguments and
>    duplicating the doxygen comments
>  * change the api of rte_mempool_alloc_t: only the mempool pointer is required
>    as all information is available in it
>  * change the api of rte_mempool_free_t: remove return value
>  * move inline wrapper functions from the .c to the .h (else they won't be
>    inlined). This implies having one header file (rte_mempool.h); otherwise it
>    would have generated cross-dependency issues.
>  * remove now unused MEMPOOL_F_INT_HANDLER (note: it was misused anyway due
>    to the use of && instead of &)
>  * fix build in debug mode (__MEMPOOL_STAT_ADD(mp, put_pool, n) remaining)
>  * fix build with shared libraries (global handler has to be declared in
>    the .map file)
>  * rationalize #include order
>  * remove unused function rte_mempool_get_handler_name()
>  * rename some structures, fields, functions
>  * remove the static in front of rte_tailq_elem rte_mempool_tailq (comment
>    from Yuanhan)
>  * test the ext mempool handler in the same file as the standard mempool
>    tests, avoiding code duplication
>  * rework the custom handler in mempool_test
>  * rework a bit the patch selecting default mbuf pool handler
>  * fix some doxygen comments
> 
> v3 changes:
>  * simplified the file layout, renamed to rte_mempool_handler.[hc]
>  * moved the default handlers into rte_mempool_default.c
>  * moved the example handler out into app/test/test_ext_mempool.c
>  * removed is_mc/is_mp change, slight perf degradation on sp cached operation
>  * removed stack handler, may re-introduce at a later date
>  * Changes out of code reviews
> 
> v2 changes:
>  * There was a lot of duplicate code between rte_mempool_xmem_create and
>    rte_mempool_create_ext. This has now been refactored and is now
>    hopefully cleaner.
>  * The RTE_NEXT_ABI define is now used to allow building of the library
>    in a format that is compatible with binaries built against previous
>    versions of DPDK.
>  * Changes out of code reviews. Hopefully I've got most of them included.
> 
> The External Mempool Manager is an extension to the mempool API that allows
> users to add and use an external mempool manager, which allows external memory
> subsystems such as external hardware memory management systems and software
> based memory allocators to be used with DPDK.
> 
> The existing API to the internal DPDK mempool manager will remain unchanged
> and will be backward compatible. However, there will be an ABI breakage, as
> the mempool struct is changing. These changes are all contained within
> RTE_NEXT_ABI defs, and the current or next code can be selected with
> the CONFIG_RTE_NEXT_ABI config setting.
> 
> There are two aspects to the external mempool manager.
>   1. Adding the code for your new mempool operations (ops). This is
>      achieved by adding a new mempool ops source file into the
>      librte_mempool library, and using the REGISTER_MEMPOOL_OPS macro.
>   2. Using the new API to call rte_mempool_create_empty and
>      rte_mempool_set_ops_byname to create a new mempool
>      using the name parameter to identify which ops to use.
> 
> New API calls added
>  1. A new rte_mempool_create_empty() function
>  2. rte_mempool_set_ops_byname() which sets the mempool's ops (functions)
>  3. rte_mempool_populate_default() and rte_mempool_populate_anon() functions,
>     which populate the mempool using the relevant ops
> 
> Several external mempool managers may be used in the same application. A new
> mempool can then be created by using the new rte_mempool_create_empty function,
> then calling rte_mempool_set_ops_byname to point the mempool to the relevant
> mempool manager callback structure.
> 
> Legacy applications will continue to use the old rte_mempool_create API call,
> which uses a ring based mempool manager by default. These applications
> will need to be modified to use a new external mempool manager.
> 
> The external mempool manager needs to provide the following functions.
>  1. alloc     - allocates the mempool memory, and adds each object onto a ring
>  2. enqueue   - puts an object back into the mempool once an application has
>                 finished with it
>  3. dequeue   - gets an object from the mempool for use by the application
>  4. get_count - gets the number of available objects in the mempool
>  5. free      - frees the mempool memory
> 
> Every time an enqueue/dequeue/get_count is called from the application/PMD,
> the callback for that mempool is called. These functions are in the fastpath,
> and any unoptimised ops may limit performance.
> 
> The new APIs are as follows:
> 
> 1. rte_mempool_create_empty
> 
> struct rte_mempool *
> rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
>     unsigned cache_size, unsigned private_data_size,
>     int socket_id, unsigned flags);
> 
> 2. rte_mempool_set_ops_byname()
> 
> int
> rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name);
> 
> 3. rte_mempool_populate_default()
> 
> int rte_mempool_populate_default(struct rte_mempool *mp);
> 
> 4. rte_mempool_populate_anon()
> 
> int rte_mempool_populate_anon(struct rte_mempool *mp);
> 
> Please see rte_mempool.h for further information on the parameters.
> 
> 
> The important thing to note is that the mempool ops struct is passed by name
> to rte_mempool_set_ops_byname, which looks through the ops struct array to
> get the ops_index, which is then stored in the rte_mempool structure. This
> allows multiple processes to use the same mempool, as the function pointers
> are accessed via the ops index.
> 
> The mempool ops structure contains callbacks to the implementation of
> the ops function, and is set up for registration as follows:
> 
> static const struct rte_mempool_ops ops_sp_mc = {
>     .name = "ring_sp_mc",
>     .alloc = rte_mempool_common_ring_alloc,
>     .enqueue = common_ring_sp_enqueue,
>     .dequeue = common_ring_mc_dequeue,
>     .get_count = common_ring_get_count,
>     .free = common_ring_free,
> };
> 
> And then the following macro will register the ops in the array of ops
> structures
> 
> REGISTER_MEMPOOL_OPS(ops_mp_mc);
> 
> For an example of API usage, please see app/test/test_mempool.c, which
> implements a rudimentary "custom_handler" mempool manager using simple mallocs
> for each mempool object. This file also contains the callbacks and self
> registration for the new handler.
> 
> David Hunt (2):
>   mempool: support external mempool operations
>   mbuf: make default mempool ops configurable at build
> 
> Olivier Matz (1):
>   app/test: test external mempool manager
> 
> 



-- 
   Jan Viktorin                  E-mail: Viktorin@RehiveTech.com
   System Architect              Web:    www.RehiveTech.com
   RehiveTech
   Brno, Czech Republic

* Re: [PATCH v12 1/3] mempool: support external mempool operations
  2016-06-15  7:47                       ` [PATCH v12 1/3] mempool: support external mempool operations David Hunt
@ 2016-06-15 10:14                         ` Jan Viktorin
  2016-06-15 10:29                           ` Hunt, David
  0 siblings, 1 reply; 238+ messages in thread
From: Jan Viktorin @ 2016-06-15 10:14 UTC (permalink / raw)
  To: David Hunt; +Cc: dev, olivier.matz, jerin.jacob, shreyansh.jain

On Wed, 15 Jun 2016 08:47:02 +0100
David Hunt <david.hunt@intel.com> wrote:

> Until now, the objects stored in a mempool were internally stored in a
> ring. This patch introduces the possibility to register external handlers
> replacing the ring.
> 
> The default behavior remains unchanged, but calling the new function
> rte_mempool_set_ops_byname() right after rte_mempool_create_empty() allows
> the user to change the handler that will be used when populating
> the mempool.
> 
> This patch also adds a set of default ops (function callbacks) based
> on rte_ring.
> 
> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> Signed-off-by: David Hunt <david.hunt@intel.com>
> Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>
> Acked-by: Olivier Matz <olivier.matz@6wind.com>
> ---
>  app/test/test_mempool_perf.c               |   1 -
>  doc/guides/prog_guide/mempool_lib.rst      |  31 +++-
>  doc/guides/rel_notes/deprecation.rst       |   9 --
>  lib/librte_mempool/Makefile                |   2 +
>  lib/librte_mempool/rte_mempool.c           |  66 +++-----
>  lib/librte_mempool/rte_mempool.h           | 251 ++++++++++++++++++++++++++---
>  lib/librte_mempool/rte_mempool_ops.c       | 148 +++++++++++++++++
>  lib/librte_mempool/rte_mempool_ring.c      | 161 ++++++++++++++++++
>  lib/librte_mempool/rte_mempool_version.map |  13 +-
>  9 files changed, 601 insertions(+), 81 deletions(-)
>  create mode 100644 lib/librte_mempool/rte_mempool_ops.c
>  create mode 100644 lib/librte_mempool/rte_mempool_ring.c
> 

[...]

> +
> +/** Array of registered ops structs. */
> +extern struct rte_mempool_ops_table rte_mempool_ops_table;
> +
> +/**
> + * @internal Get the mempool ops struct from its index.
> + *
> + * @param ops_index
> + *   The index of the ops struct in the ops struct table. It must be a valid
> + *   index: (0 <= idx < num_ops).
> + * @return
> + *   The pointer to the ops struct in the table.
> + */
> +static inline struct rte_mempool_ops *
> +rte_mempool_ops_get(int ops_index)

Shouldn't this function be called rte_mempool_get/find_ops instead?

Jan

> +{
> +	RTE_VERIFY(ops_index < RTE_MEMPOOL_MAX_OPS_IDX);
> +
> +	return &rte_mempool_ops_table.ops[ops_index];
> +}
> +

* Re: [PATCH v12 1/3] mempool: support external mempool operations
  2016-06-15 10:14                         ` Jan Viktorin
@ 2016-06-15 10:29                           ` Hunt, David
  2016-06-15 11:26                             ` Jan Viktorin
  2016-06-15 11:38                             ` Thomas Monjalon
  0 siblings, 2 replies; 238+ messages in thread
From: Hunt, David @ 2016-06-15 10:29 UTC (permalink / raw)
  To: Jan Viktorin; +Cc: dev, olivier.matz, jerin.jacob, shreyansh.jain



On 15/6/2016 11:14 AM, Jan Viktorin wrote:
> On Wed, 15 Jun 2016 08:47:02 +0100
> David Hunt <david.hunt@intel.com> wrote:
>

[...]

>
>> +
>> +/** Array of registered ops structs. */
>> +extern struct rte_mempool_ops_table rte_mempool_ops_table;
>> +
>> +/**
>> + * @internal Get the mempool ops struct from its index.
>> + *
>> + * @param ops_index
>> + *   The index of the ops struct in the ops struct table. It must be a valid
>> + *   index: (0 <= idx < num_ops).
>> + * @return
>> + *   The pointer to the ops struct in the table.
>> + */
>> +static inline struct rte_mempool_ops *
>> +rte_mempool_ops_get(int ops_index)
> Shouldn't this function be called rte_mempool_get/find_ops instead?
>
>

Jan,

    I think at this stage that it's probably OK as it is.  :)

Rgds,
Dave.

* Re: [PATCH v12 1/3] mempool: support external mempool operations
  2016-06-15 10:29                           ` Hunt, David
@ 2016-06-15 11:26                             ` Jan Viktorin
  2016-06-15 11:38                             ` Thomas Monjalon
  1 sibling, 0 replies; 238+ messages in thread
From: Jan Viktorin @ 2016-06-15 11:26 UTC (permalink / raw)
  To: Hunt, David; +Cc: dev, olivier.matz, jerin.jacob, shreyansh.jain

On Wed, 15 Jun 2016 11:29:51 +0100
"Hunt, David" <david.hunt@intel.com> wrote:

> On 15/6/2016 11:14 AM, Jan Viktorin wrote:
> > On Wed, 15 Jun 2016 08:47:02 +0100
> > David Hunt <david.hunt@intel.com> wrote:
> >  
> 
> [...]
> 
> >  
> >> +
> >> +/** Array of registered ops structs. */
> >> +extern struct rte_mempool_ops_table rte_mempool_ops_table;
> >> +
> >> +/**
> >> + * @internal Get the mempool ops struct from its index.
> >> + *
> >> + * @param ops_index
> >> + *   The index of the ops struct in the ops struct table. It must be a valid
> >> + *   index: (0 <= idx < num_ops).
> >> + * @return
> >> + *   The pointer to the ops struct in the table.
> >> + */
> >> +static inline struct rte_mempool_ops *
> >> +rte_mempool_ops_get(int ops_index)  
> > Shouldn't this function be called rte_mempool_get/find_ops instead?
> >
> >  
> 
> Jan,
> 
>     I think at this stage that it's probably OK as it is.  :)

Ok. I just remember some discussion about this. I didn't follow
the thread during the last few days, so I wanted to be sure that it's not
forgotten.

Jan

> 
> Rgds,
> Dave.
> 
> 
> 
> 



-- 
   Jan Viktorin                  E-mail: Viktorin@RehiveTech.com
   System Architect              Web:    www.RehiveTech.com
   RehiveTech
   Brno, Czech Republic

* Re: [PATCH v12 1/3] mempool: support external mempool operations
  2016-06-15 10:29                           ` Hunt, David
  2016-06-15 11:26                             ` Jan Viktorin
@ 2016-06-15 11:38                             ` Thomas Monjalon
  1 sibling, 0 replies; 238+ messages in thread
From: Thomas Monjalon @ 2016-06-15 11:38 UTC (permalink / raw)
  To: Hunt, David; +Cc: dev, Jan Viktorin, olivier.matz, jerin.jacob, shreyansh.jain

2016-06-15 11:29, Hunt, David:
> 
> On 15/6/2016 11:14 AM, Jan Viktorin wrote:
> > On Wed, 15 Jun 2016 08:47:02 +0100
> > David Hunt <david.hunt@intel.com> wrote:
> >
> 
> [...]
> 
> >
> >> +
> >> +/** Array of registered ops structs. */
> >> +extern struct rte_mempool_ops_table rte_mempool_ops_table;
> >> +
> >> +/**
> >> + * @internal Get the mempool ops struct from its index.
> >> + *
> >> + * @param ops_index
> >> + *   The index of the ops struct in the ops struct table. It must be a valid
> >> + *   index: (0 <= idx < num_ops).
> >> + * @return
> >> + *   The pointer to the ops struct in the table.
> >> + */
> >> +static inline struct rte_mempool_ops *
> >> +rte_mempool_ops_get(int ops_index)
> > Shouldn't this function be called rte_mempool_get/find_ops instead?
> 
> Jan,
> 
>     I think at this stage that it's probably OK as it is.  :)

?
What is the justification?

* Re: [PATCH v12 0/3] mempool: add external mempool manager
  2016-06-15 10:13                       ` [PATCH v12 0/3] mempool: add external mempool manager Jan Viktorin
@ 2016-06-15 11:47                         ` Hunt, David
  2016-06-15 12:03                           ` Olivier MATZ
  0 siblings, 1 reply; 238+ messages in thread
From: Hunt, David @ 2016-06-15 11:47 UTC (permalink / raw)
  To: Jan Viktorin; +Cc: dev, olivier.matz, jerin.jacob, shreyansh.jain



On 15/6/2016 11:13 AM, Jan Viktorin wrote:
> Hi,
>
> I've got one last question. Initially, I was interested in creating
> my own external memory provider based on a Linux Kernel driver.
> So, I've got an opened file descriptor that points to a device which
> can mmap memory regions for me.
>
> ...
> int fd = open("/dev/uio0" ...);
> ...
> rte_mempool *pool = rte_mempool_create_empty(...);
> rte_mempool_set_ops_byname(pool, "uio_allocator_ops");
>
> I am not sure how to pass the file descriptor pointer. I thought it would
> be possible by the rte_mempool_alloc but it's not... Is it possible
> to solve this case?
>
> The allocator is device-specific.
>
> Regards
> Jan

This particular use case is not covered.

We did discuss this before, and an opaque pointer was proposed, but did 
not make it in.
http://article.gmane.org/gmane.comp.networking.dpdk.devel/39821
(and following emails in that thread)

So, the options for this use case are as follows:
1. Use the pool_data to pass data in to the alloc, then set the
pool_data pointer before coming back from alloc. (It's a bit of a hack,
but means no code change.)
2. Add an extra parameter to the alloc function. The simplest way I can
think of doing this is to take the *opaque passed into
rte_mempool_populate_phys, and pass it on into the alloc function.
This will have minimal impact on the public APIs, as there is already an
opaque there in the _populate_ funcs; we're just reusing it for the alloc.

Do others think option 2 is OK to add this at this late stage? Even if 
the patch set has already been ACK'd?

Regards,
Dave.

* Re: [PATCH v12 0/3] mempool: add external mempool manager
  2016-06-15 11:47                         ` Hunt, David
@ 2016-06-15 12:03                           ` Olivier MATZ
  2016-06-15 12:38                             ` Hunt, David
  2016-06-15 16:34                             ` Hunt, David
  0 siblings, 2 replies; 238+ messages in thread
From: Olivier MATZ @ 2016-06-15 12:03 UTC (permalink / raw)
  To: Hunt, David, Jan Viktorin; +Cc: dev, jerin.jacob, shreyansh.jain

Hi,

On 06/15/2016 01:47 PM, Hunt, David wrote:
>
>
> On 15/6/2016 11:13 AM, Jan Viktorin wrote:
>> Hi,
>>
>> I've got one last question. Initially, I was interested in creating
>> my own external memory provider based on a Linux Kernel driver.
>> So, I've got an opened file descriptor that points to a device which
>> can mmap memory regions for me.
>>
>> ...
>> int fd = open("/dev/uio0" ...);
>> ...
>> rte_mempool *pool = rte_mempool_create_empty(...);
>> rte_mempool_set_ops_byname(pool, "uio_allocator_ops");
>>
>> I am not sure how to pass the file descriptor pointer. I thought it would
>> be possible by the rte_mempool_alloc but it's not... Is it possible
>> to solve this case?
>>
>> The allocator is device-specific.
>>
>> Regards
>> Jan
>
> This particular use case is not covered.
>
> We did discuss this before, and an opaque pointer was proposed, but did
> not make it in.
> http://article.gmane.org/gmane.comp.networking.dpdk.devel/39821
> (and following emails in that thread)
>
> So, the options for this use case are as follows:
> 1. Use the pool_data to pass data in to the alloc, then set the
> pool_data pointer before coming back from alloc. (It's a bit of a hack,
> but means no code change).
> 2. Add an extra parameter to the alloc function. The simplest way I can
> think of doing this is to
> take the *opaque passed into rte_mempool_populate_phys, and pass it on
> into the alloc function.
> This will have minimal impact on the public APIs, as there is already an
> opaque there in the _populate_ funcs, we're just
> reusing it for the alloc.
>
> Do others think option 2 is OK to add this at this late stage? Even if
> the patch set has already been ACK'd?

Jan's use-case looks to be relevant.

What about changing:

   rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name)

into:

  rte_mempool_set_ops(struct rte_mempool *mp, const char *name,
     void *opaque)

?

The opaque pointer would be saved in the mempool structure, and used
when the mempool is populated (calling mempool_ops_alloc).
The type of the structure pointed to by the opaque has to be defined
(and documented) in each mempool_ops manager.


Olivier

* Re: [PATCH v12 0/3] mempool: add external mempool manager
  2016-06-15 12:03                           ` Olivier MATZ
@ 2016-06-15 12:38                             ` Hunt, David
  2016-06-15 13:50                               ` Olivier MATZ
  2016-06-15 16:34                             ` Hunt, David
  1 sibling, 1 reply; 238+ messages in thread
From: Hunt, David @ 2016-06-15 12:38 UTC (permalink / raw)
  To: Olivier MATZ, Jan Viktorin; +Cc: dev, jerin.jacob, shreyansh.jain



On 15/6/2016 1:03 PM, Olivier MATZ wrote:
> Hi,
>
> On 06/15/2016 01:47 PM, Hunt, David wrote:
>>
>>
>> On 15/6/2016 11:13 AM, Jan Viktorin wrote:
>>> Hi,
>>>
>>> I've got one last question. Initially, I was interested in creating
>>> my own external memory provider based on a Linux Kernel driver.
>>> So, I've got an opened file descriptor that points to a device which
>>> can mmap memory regions for me.
>>>
>>> ...
>>> int fd = open("/dev/uio0" ...);
>>> ...
>>> rte_mempool *pool = rte_mempool_create_empty(...);
>>> rte_mempool_set_ops_byname(pool, "uio_allocator_ops");
>>>
>>> I am not sure how to pass the file descriptor pointer. I thought it 
>>> would
>>> be possible by the rte_mempool_alloc but it's not... Is it possible
>>> to solve this case?
>>>
>>> The allocator is device-specific.
>>>
>>> Regards
>>> Jan
>>
>> This particular use case is not covered.
>>
>> We did discuss this before, and an opaque pointer was proposed, but did
>> not make it in.
>> http://article.gmane.org/gmane.comp.networking.dpdk.devel/39821
>> (and following emails in that thread)
>>
>> So, the options for this use case are as follows:
>> 1. Use the pool_data to pass data in to the alloc, then set the
>> pool_data pointer before coming back from alloc. (It's a bit of a hack,
>> but means no code change).
>> 2. Add an extra parameter to the alloc function. The simplest way I can
>> think of doing this is to
>> take the *opaque passed into rte_mempool_populate_phys, and pass it on
>> into the alloc function.
>> This will have minimal impact on the public APIs, as there is already an
>> opaque there in the _populate_ funcs, we're just
>> reusing it for the alloc.
>>
>> Do others think option 2 is OK to add this at this late stage? Even if
>> the patch set has already been ACK'd?
>
> Jan's use-case looks to be relevant.
>
> What about changing:
>
>   rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name)
>
> into:
>
>  rte_mempool_set_ops(struct rte_mempool *mp, const char *name,
>     void *opaque)
>
> ?
>
> The opaque pointer would be saved in the mempool structure, and used
> when the mempool is populated (calling mempool_ops_alloc).
> The type of the structure pointed to by the opaque has to be defined
> (and documented) in each mempool_ops manager.
>

Yes, that was another option, which has the additional impact of needing an
opaque added to the mempool struct. If we use the opaque from the _populate_
function, we use it straight away in the alloc, no storage needed.

Also, do you think we need to go ahead with this change, or can we add it
later as an improvement?

Regards,
Dave.

* Re: [PATCH v12 0/3] mempool: add external mempool manager
  2016-06-15 12:38                             ` Hunt, David
@ 2016-06-15 13:50                               ` Olivier MATZ
  2016-06-15 14:02                                 ` Hunt, David
  0 siblings, 1 reply; 238+ messages in thread
From: Olivier MATZ @ 2016-06-15 13:50 UTC (permalink / raw)
  To: Hunt, David, Jan Viktorin; +Cc: dev, jerin.jacob, shreyansh.jain

Hi David,

On 06/15/2016 02:38 PM, Hunt, David wrote:
>
>
> On 15/6/2016 1:03 PM, Olivier MATZ wrote:
>> Hi,
>>
>> On 06/15/2016 01:47 PM, Hunt, David wrote:
>>>
>>>
>>> On 15/6/2016 11:13 AM, Jan Viktorin wrote:
>>>> Hi,
>>>>
>>>> I've got one last question. Initially, I was interested in creating
>>>> my own external memory provider based on a Linux Kernel driver.
>>>> So, I've got an opened file descriptor that points to a device which
>>>> can mmap memory regions for me.
>>>>
>>>> ...
>>>> int fd = open("/dev/uio0" ...);
>>>> ...
>>>> rte_mempool *pool = rte_mempool_create_empty(...);
>>>> rte_mempool_set_ops_byname(pool, "uio_allocator_ops");
>>>>
>>>> I am not sure how to pass the file descriptor pointer. I thought it
>>>> would
>>>> be possible by the rte_mempool_alloc but it's not... Is it possible
>>>> to solve this case?
>>>>
>>>> The allocator is device-specific.
>>>>
>>>> Regards
>>>> Jan
>>>
>>> This particular use case is not covered.
>>>
>>> We did discuss this before, and an opaque pointer was proposed, but did
>>> not make it in.
>>> http://article.gmane.org/gmane.comp.networking.dpdk.devel/39821
>>> (and following emails in that thread)
>>>
>>> So, the options for this use case are as follows:
>>> 1. Use the pool_data to pass data in to the alloc, then set the
>>> pool_data pointer before coming back from alloc. (It's a bit of a hack,
>>> but means no code change).
>>> 2. Add an extra parameter to the alloc function. The simplest way I can
>>> think of doing this is to
>>> take the *opaque passed into rte_mempool_populate_phys, and pass it on
>>> into the alloc function.
>>> This will have minimal impact on the public APIs, as there is already an
>>> opaque there in the _populate_ funcs, we're just
>>> reusing it for the alloc.
>>>
>>> Do others think option 2 is OK to add this at this late stage? Even if
>>> the patch set has already been ACK'd?
>>
>> Jan's use-case looks to be relevant.
>>
>> What about changing:
>>
>>   rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name)
>>
>> into:
>>
>>  rte_mempool_set_ops(struct rte_mempool *mp, const char *name,
>>     void *opaque)
>>
>> ?
>>
>> The opaque pointer would be saved in the mempool structure, and used
>> when the mempool is populated (calling mempool_ops_alloc).
>> The type of the structure pointed to by the opaque has to be defined
>> (and documented) in each mempool_ops manager.
>>
>
> Yes, that was another option, which has the additional impact of needing an
> opaque added to the mempool struct. If we use the opaque from the
> _populate_
> function, we use it straight away in the alloc, no storage needed.
>
> Also, do you think we need to go ahead with this change, or can we add
> it later as an
> improvement?

The opaque in populate_phys() is already used for something else
(i.e. the argument for the free callback of the memory chunk).
I'm afraid it could cause confusion to have it used for 2 different
things.

About the change, I think it could be good to have it in 16.11,
because it will probably change the API, and we should avoid
changing it each version ;)

So I'd vote to have it in the patchset for consistency.


Olivier

* Re: [PATCH v12 0/3] mempool: add external mempool manager
  2016-06-15 13:50                               ` Olivier MATZ
@ 2016-06-15 14:02                                 ` Hunt, David
  2016-06-15 14:10                                   ` Olivier MATZ
  0 siblings, 1 reply; 238+ messages in thread
From: Hunt, David @ 2016-06-15 14:02 UTC (permalink / raw)
  To: Olivier MATZ, Jan Viktorin; +Cc: dev, jerin.jacob, shreyansh.jain



On 15/6/2016 2:50 PM, Olivier MATZ wrote:
> Hi David,
>
> On 06/15/2016 02:38 PM, Hunt, David wrote:
>>
>>
>> On 15/6/2016 1:03 PM, Olivier MATZ wrote:
>>> Hi,
>>>
>>> On 06/15/2016 01:47 PM, Hunt, David wrote:
>>>>
>>>>
>>>> On 15/6/2016 11:13 AM, Jan Viktorin wrote:
>>>>> Hi,
>>>>>
>>>>> I've got one last question. Initially, I was interested in creating
>>>>> my own external memory provider based on a Linux Kernel driver.
>>>>> So, I've got an opened file descriptor that points to a device which
>>>>> can mmap memory regions for me.
>>>>>
>>>>> ...
>>>>> int fd = open("/dev/uio0" ...);
>>>>> ...
>>>>> rte_mempool *pool = rte_mempool_create_empty(...);
>>>>> rte_mempool_set_ops_byname(pool, "uio_allocator_ops");
>>>>>
>>>>> I am not sure how to pass the file descriptor pointer. I thought it
>>>>> would
>>>>> be possible by the rte_mempool_alloc but it's not... Is it possible
>>>>> to solve this case?
>>>>>
>>>>> The allocator is device-specific.
>>>>>
>>>>> Regards
>>>>> Jan
>>>>
>>>> This particular use case is not covered.
>>>>
>>>> We did discuss this before, and an opaque pointer was proposed, but 
>>>> did
>>>> not make it in.
>>>> http://article.gmane.org/gmane.comp.networking.dpdk.devel/39821
>>>> (and following emails in that thread)
>>>>
>>>> So, the options for this use case are as follows:
>>>> 1. Use the pool_data to pass data in to the alloc, then set the
>>>> pool_data pointer before coming back from alloc. (It's a bit of a 
>>>> hack,
>>>> but means no code change).
>>>> 2. Add an extra parameter to the alloc function. The simplest way I 
>>>> can
>>>> think of doing this is to
>>>> take the *opaque passed into rte_mempool_populate_phys, and pass it on
>>>> into the alloc function.
>>>> This will have minimal impact on the public APIs, as there is 
>>>> already an
>>>> opaque there in the _populate_ funcs, we're just
>>>> reusing it for the alloc.
>>>>
>>>> Do others think option 2 is OK to add this at this late stage? Even if
>>>> the patch set has already been ACK'd?
>>>
>>> Jan's use-case looks to be relevant.
>>>
>>> What about changing:
>>>
>>>   rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name)
>>>
>>> into:
>>>
>>>  rte_mempool_set_ops(struct rte_mempool *mp, const char *name,
>>>     void *opaque)
>>>
>>> ?
>>>
>>> The opaque pointer would be saved in the mempool structure, and used
>>> when the mempool is populated (calling mempool_ops_alloc).
>>> The type of the structure pointed to by the opaque has to be defined
>>> (and documented) in each mempool_ops manager.
>>>
>>
>> Yes, that was another option, which has the additional impact of 
>> needing an
>> opaque added to the mempool struct. If we use the opaque from the
>> _populate_
>> function, we use it straight away in the alloc, no storage needed.
>>
>> Also, do you think we need to go ahead with this change, or can we add
>> it later as an
>> improvement?
>
> The opaque in populate_phys() is already used for something else
> (i.e. the argument for the free callback of the memory chunk).
> I'm afraid it could cause confusion to have it used for 2 different
> things.
>
> About the change, I think it could be good to have it in 16.11,
> because it will probably change the API, and we should avoid
> changing it each version ;)
>
> So I'd vote to have it in the patchset for consistency.

Sure, we should avoid breaking API just after we created it. :)

OK, here's a slightly different proposal.

If we add a string to _ops_byname, yes, that will work for Jan's case.
However, if we add a void *, that allows us the flexibility of passing
anything we want. We can then store the void * in the mempool struct as
void *pool_config, so that when the alloc gets called, it can access
whatever is stored at *pool_config and do the correct
initialisation/allocation. In Jan's use case, this can simply be typecast
to a string. In future cases, it can be a struct, which could include
new flags.

I think that change is minimal enough to be low risk at this stage.

Thoughts?

Dave.
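
[Editor's note: for concreteness, the pool_config proposal above can be sketched as below. The structures are toy stand-ins invented for illustration, not the real DPDK definitions, and the ops-table lookup is elided; only the flow of the opaque pointer from set_ops_byname into the alloc callback is shown.]

```c
#include <stddef.h>

/* Toy stand-in for the real rte_mempool -- layout invented here;
 * only the opaque-pointer flow is illustrated. */
struct rte_mempool {
	int ops_index;       /* index of the selected ops (lookup elided) */
	void *pool_config;   /* proposed: opaque saved for the alloc callback */
};

/* Proposed signature: the extra void * is remembered by the mempool. */
int
rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
			   void *pool_config)
{
	(void)name;          /* real code resolves the name to an ops index */
	mp->ops_index = 0;
	mp->pool_config = pool_config;
	return 0;
}

/* A device-specific alloc callback can now recover its argument
 * (Jan's file descriptor) when the pool is populated. */
int
uio_allocator_alloc(struct rte_mempool *mp)
{
	return *(int *)mp->pool_config;   /* e.g. the fd from open() */
}

/* Jan's flow: open a device, create the pool, hand the fd to the ops. */
int
demo(void)
{
	static int fd = 42;               /* stand-in for open("/dev/uio0") */
	struct rte_mempool mp = { 0, NULL };

	rte_mempool_set_ops_byname(&mp, "uio_allocator_ops", &fd);
	return uio_allocator_alloc(&mp);  /* reads the fd back out */
}
```

The function and ops names follow the discussion; everything else here is illustrative only.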

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v12 0/3] mempool: add external mempool manager
  2016-06-15 14:02                                 ` Hunt, David
@ 2016-06-15 14:10                                   ` Olivier MATZ
  2016-06-15 14:47                                     ` Jan Viktorin
  0 siblings, 1 reply; 238+ messages in thread
From: Olivier MATZ @ 2016-06-15 14:10 UTC (permalink / raw)
  To: Hunt, David, Jan Viktorin; +Cc: dev, jerin.jacob, shreyansh.jain



On 06/15/2016 04:02 PM, Hunt, David wrote:
>
>
> On 15/6/2016 2:50 PM, Olivier MATZ wrote:
>> Hi David,
>>
>> On 06/15/2016 02:38 PM, Hunt, David wrote:
>>>
>>>
>>> On 15/6/2016 1:03 PM, Olivier MATZ wrote:
>>>> Hi,
>>>>
>>>> On 06/15/2016 01:47 PM, Hunt, David wrote:
>>>>>
>>>>>
>>>>> On 15/6/2016 11:13 AM, Jan Viktorin wrote:
>>>>>> Hi,
>>>>>>
>>>>>> I've got one last question. Initially, I was interested in creating
>>>>>> my own external memory provider based on a Linux Kernel driver.
>>>>>> So, I've got an opened file descriptor that points to a device which
>>>>> can mmap memory regions for me.
>>>>>>
>>>>>> ...
>>>>>> int fd = open("/dev/uio0" ...);
>>>>>> ...
>>>>>> rte_mempool *pool = rte_mempool_create_empty(...);
>>>>>> rte_mempool_set_ops_byname(pool, "uio_allocator_ops");
>>>>>>
>>>>>> I am not sure how to pass the file descriptor pointer. I thought it
>>>>>> would
>>>>>> be possible by the rte_mempool_alloc but it's not... Is it possible
>>>>>> to solve this case?
>>>>>>
>>>>>> The allocator is device-specific.
>>>>>>
>>>>>> Regards
>>>>>> Jan
>>>>>
>>>>> This particular use case is not covered.
>>>>>
>>>>> We did discuss this before, and an opaque pointer was proposed, but
>>>>> did
>>>>> not make it in.
>>>>> http://article.gmane.org/gmane.comp.networking.dpdk.devel/39821
>>>>> (and following emails in that thread)
>>>>>
>>>>> So, the options for this use case are as follows:
>>>>> 1. Use the pool_data to pass data in to the alloc, then set the
>>>>> pool_data pointer before coming back from alloc. (It's a bit of a
>>>>> hack,
>>>>> but means no code change).
>>>>> 2. Add an extra parameter to the alloc function. The simplest way I
>>>>> can
>>>>> think of doing this is to
>>>>> take the *opaque passed into rte_mempool_populate_phys, and pass it on
>>>>> into the alloc function.
>>>>> This will have minimal impact on the public APIs, as there is
>>>>> already an
>>>>> opaque there in the _populate_ funcs, we're just
>>>>> reusing it for the alloc.
>>>>>
>>>>> Do others think option 2 is OK to add this at this late stage? Even if
>>>>> the patch set has already been ACK'd?
>>>>
>>>> Jan's use-case looks to be relevant.
>>>>
>>>> What about changing:
>>>>
>>>>   rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name)
>>>>
>>>> into:
>>>>
>>>>  rte_mempool_set_ops(struct rte_mempool *mp, const char *name,
>>>>     void *opaque)
>>>>
>>>> ?
>>>>
>>>> The opaque pointer would be saved in mempool structure, and used
>>>> when the mempool is populated (calling mempool_ops_alloc).
>>>> The type of the structure pointed by the opaque has to be defined
>>>> (and documented) into each mempool_ops manager.
>>>>
>>>
>>> Yes, that was another option, which has the additional impact of
>>> needing an
>>> opaque added to the mempool struct. If we use the opaque from the
>>> _populate_
>>> function, we use it straight away in the alloc, no storage needed.
>>>
>>> Also, do you think we need to go ahead with this change, or can we add
>>> it later as an
>>> improvement?
>>
>> The opaque in populate_phys() is already used for something else
>> (i.e. the argument for the free callback of the memory chunk).
>> I'm afraid it could cause confusion to have it used for 2 different
>> things.
>>
>> About the change, I think it could be good to have it in 16.11,
>> because it will probably change the API, and we should avoid
>> changing it each version ;)
>>
>> So I'd vote to have it in the patchset for consistency.
>
> Sure, we should avoid breaking API just after we created it. :)
>
> OK, here's a slightly different proposal.
>
> > If we add a string to _ops_byname, yes, that will work for Jan's case.
> > However, if we add a void*, that allows us the flexibility of passing
> anything we
> want. We can then store the void* in the mempool struct as void
> *pool_config,
> so that when the alloc gets called, it can access whatever is stored at
> *pool_config
> and do the correct initialisation/allocation. In Jan's use case, this
> can simply be typecast
> > to a string. In future cases, it can be a struct, which could include
> new flags.

Yep, agree. But not sure I'm seeing the difference with what I
proposed.

>
> I think that change is minimal enough to be low risk at this stage.
>
> Thoughts?

Agree. Thanks!


Olivier

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v12 0/3] mempool: add external mempool manager
  2016-06-15 14:10                                   ` Olivier MATZ
@ 2016-06-15 14:47                                     ` Jan Viktorin
  2016-06-15 16:03                                       ` Hunt, David
  0 siblings, 1 reply; 238+ messages in thread
From: Jan Viktorin @ 2016-06-15 14:47 UTC (permalink / raw)
  To: Olivier MATZ; +Cc: Hunt, David, dev, jerin.jacob, shreyansh.jain

On Wed, 15 Jun 2016 16:10:13 +0200
Olivier MATZ <olivier.matz@6wind.com> wrote:

> On 06/15/2016 04:02 PM, Hunt, David wrote:
> >
> >
> > On 15/6/2016 2:50 PM, Olivier MATZ wrote:  
> >> Hi David,
> >>
> >> On 06/15/2016 02:38 PM, Hunt, David wrote:  
> >>>
> >>>
> >>> On 15/6/2016 1:03 PM, Olivier MATZ wrote:  
> >>>> Hi,
> >>>>
> >>>> On 06/15/2016 01:47 PM, Hunt, David wrote:  
> >>>>>
> >>>>>
> >>>>> On 15/6/2016 11:13 AM, Jan Viktorin wrote:  
> >>>>>> Hi,
> >>>>>>
> >>>>>> I've got one last question. Initially, I was interested in creating
> >>>>>> my own external memory provider based on a Linux Kernel driver.
> >>>>>> So, I've got an opened file descriptor that points to a device which
> >>>>>> can mmap memory regions for me.
> >>>>>>
> >>>>>> ...
> >>>>>> int fd = open("/dev/uio0" ...);
> >>>>>> ...
> >>>>>> rte_mempool *pool = rte_mempool_create_empty(...);
> >>>>>> rte_mempool_set_ops_byname(pool, "uio_allocator_ops");
> >>>>>>
> >>>>>> I am not sure how to pass the file descriptor pointer. I thought it
> >>>>>> would
> >>>>>> be possible by the rte_mempool_alloc but it's not... Is it possible
> >>>>>> to solve this case?
> >>>>>>
> >>>>>> The allocator is device-specific.
> >>>>>>
> >>>>>> Regards
> >>>>>> Jan  
> >>>>>
> >>>>> This particular use case is not covered.
> >>>>>
> >>>>> We did discuss this before, and an opaque pointer was proposed, but
> >>>>> did
> >>>>> not make it in.
> >>>>> http://article.gmane.org/gmane.comp.networking.dpdk.devel/39821
> >>>>> (and following emails in that thread)
> >>>>>
> >>>>> So, the options for this use case are as follows:
> >>>>> 1. Use the pool_data to pass data in to the alloc, then set the
> >>>>> pool_data pointer before coming back from alloc. (It's a bit of a
> >>>>> hack,
> >>>>> but means no code change).
> >>>>> 2. Add an extra parameter to the alloc function. The simplest way I
> >>>>> can
> >>>>> think of doing this is to
> >>>>> take the *opaque passed into rte_mempool_populate_phys, and pass it on
> >>>>> into the alloc function.
> >>>>> This will have minimal impact on the public APIs, as there is
> >>>>> already an
> >>>>> opaque there in the _populate_ funcs, we're just
> >>>>> reusing it for the alloc.
> >>>>>
> >>>>> Do others think option 2 is OK to add this at this late stage? Even if
> >>>>> the patch set has already been ACK'd?  
> >>>>
> >>>> Jan's use-case looks to be relevant.
> >>>>
> >>>> What about changing:
> >>>>
> >>>>   rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name)
> >>>>
> >>>> into:
> >>>>
> >>>>  rte_mempool_set_ops(struct rte_mempool *mp, const char *name,
> >>>>     void *opaque)

Or a third function?

rte_mempool_set_ops_arg(struct rte_mempool *mp, void *arg)

Or isn't it really a task for a kind of rte_mempool_populate_*?

This is a part of mempool I am not involved in yet.

> >>>>
> >>>> ?
> >>>>
> >>>> The opaque pointer would be saved in mempool structure, and used
> >>>> when the mempool is populated (calling mempool_ops_alloc).
> >>>> The type of the structure pointed by the opaque has to be defined
> >>>> (and documented) into each mempool_ops manager.
> >>>>  
> >>>
> >>> Yes, that was another option, which has the additional impact of
> >>> needing an
> >>> opaque added to the mempool struct. If we use the opaque from the
> >>> _populate_
> >>> function, we use it straight away in the alloc, no storage needed.
> >>>
> >>> Also, do you think we need to go ahead with this change, or can we add
> >>> it later as an
> >>> improvement?  
> >>
> >> The opaque in populate_phys() is already used for something else
> >> (i.e. the argument for the free callback of the memory chunk).
> >> I'm afraid it could cause confusion to have it used for 2 different
> >> things.
> >>
> >> About the change, I think it could be good to have it in 16.11,
> >> because it will probably change the API, and we should avoid
> >> changing it each version ;)  
> >>
> >> So I'd vote to have it in the patchset for consistency.  
> >
> > Sure, we should avoid breaking API just after we created it. :)
> >
> > OK, here's a slightly different proposal.
> >
> > If we add a string to _ops_byname, yes, that will work for Jan's case.

A string? No, I needed to pass a file descriptor or a pointer to some rte_device
or something like that. So a void * is the way to go.

> > However, if we add a void*, that allows us the flexibility of passing
> > anything we
> > want. We can then store the void* in the mempool struct as void
> > *pool_config,

void *ops_context, ops_args, ops_data, ...

> > so that when the alloc gets called, it can access whatever is stored at
> > *pool_config
> > and do the correct initialisation/allocation. In Jan's use case, this
> > can simply be typecast
> > to a string. In future cases, it can be a struct, which could include
> > new flags.

New flags? Does it mean an API extension?
  
> 
> Yep, agree. But not sure I'm seeing the difference with what I
> proposed.

Me neither... I think it is exactly the same :).

Jan

> 
> >
> > I think that change is minimal enough to be low risk at this stage.
> >
> > Thoughts?  
> 
> Agree. Thanks!
> 
> 
> Olivier



-- 
   Jan Viktorin                  E-mail: Viktorin@RehiveTech.com
   System Architect              Web:    www.RehiveTech.com
   RehiveTech
   Brno, Czech Republic

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v12 0/3] mempool: add external mempool manager
  2016-06-15 14:47                                     ` Jan Viktorin
@ 2016-06-15 16:03                                       ` Hunt, David
  0 siblings, 0 replies; 238+ messages in thread
From: Hunt, David @ 2016-06-15 16:03 UTC (permalink / raw)
  To: Jan Viktorin, Olivier MATZ; +Cc: dev, jerin.jacob, shreyansh.jain



On 15/6/2016 3:47 PM, Jan Viktorin wrote:
> On Wed, 15 Jun 2016 16:10:13 +0200
> Olivier MATZ <olivier.matz@6wind.com> wrote:
>
>> On 06/15/2016 04:02 PM, Hunt, David wrote:
>>>
>>> On 15/6/2016 2:50 PM, Olivier MATZ wrote:
>>>> Hi David,
>>>>
>>>> On 06/15/2016 02:38 PM, Hunt, David wrote:
>>>>>
>>>>> On 15/6/2016 1:03 PM, Olivier MATZ wrote:
>>>>>> Hi,
>>>>>>
>>>>>> On 06/15/2016 01:47 PM, Hunt, David wrote:
>>>>>>>
>>>>>>> On 15/6/2016 11:13 AM, Jan Viktorin wrote:
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> I've got one last question. Initially, I was interested in creating
>>>>>>>> my own external memory provider based on a Linux Kernel driver.
>>>>>>>> So, I've got an opened file descriptor that points to a device which
>>>>>>>> can mmap memory regions for me.
>>>>>>>>
>>>>>>>> ...
>>>>>>>> int fd = open("/dev/uio0" ...);
>>>>>>>> ...
>>>>>>>> rte_mempool *pool = rte_mempool_create_empty(...);
>>>>>>>> rte_mempool_set_ops_byname(pool, "uio_allocator_ops");
>>>>>>>>
>>>>>>>> I am not sure how to pass the file descriptor pointer. I thought it
>>>>>>>> would
>>>>>>>> be possible by the rte_mempool_alloc but it's not... Is it possible
>>>>>>>> to solve this case?
>>>>>>>>
>>>>>>>> The allocator is device-specific.
>>>>>>>>
>>>>>>>> Regards
>>>>>>>> Jan
>>>>>>> This particular use case is not covered.
>>>>>>>
>>>>>>> We did discuss this before, and an opaque pointer was proposed, but
>>>>>>> did
>>>>>>> not make it in.
>>>>>>> http://article.gmane.org/gmane.comp.networking.dpdk.devel/39821
>>>>>>> (and following emails in that thread)
>>>>>>>
>>>>>>> So, the options for this use case are as follows:
>>>>>>> 1. Use the pool_data to pass data in to the alloc, then set the
>>>>>>> pool_data pointer before coming back from alloc. (It's a bit of a
>>>>>>> hack,
>>>>>>> but means no code change).
>>>>>>> 2. Add an extra parameter to the alloc function. The simplest way I
>>>>>>> can
>>>>>>> think of doing this is to
>>>>>>> take the *opaque passed into rte_mempool_populate_phys, and pass it on
>>>>>>> into the alloc function.
>>>>>>> This will have minimal impact on the public APIs, as there is
>>>>>>> already an
>>>>>>> opaque there in the _populate_ funcs, we're just
>>>>>>> reusing it for the alloc.
>>>>>>>
>>>>>>> Do others think option 2 is OK to add this at this late stage? Even if
>>>>>>> the patch set has already been ACK'd?
>>>>>> Jan's use-case looks to be relevant.
>>>>>>
>>>>>> What about changing:
>>>>>>
>>>>>>    rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name)
>>>>>>
>>>>>> into:
>>>>>>
>>>>>>   rte_mempool_set_ops(struct rte_mempool *mp, const char *name,
>>>>>>      void *opaque)
> Or a third function?
>
> rte_mempool_set_ops_arg(struct rte_mempool *mp, void *arg)

I think if we tried to add another function, there would be some
opposition to that. I think it's reasonable to add it to
set_ops_byname, as we're setting the ops and the ops_args in the
same call. We use them later in the alloc.

>
> Or isn't it really a task for a kind of rte_mempool_populate_*?

I was leaning towards that, but a different proposal was suggested.
I'm OK with adding the *opaque to set_ops_byname.

> This is a part of mempool I am not involved in yet.
>
>>>>>> ?
>>>>>>
>>>>>> The opaque pointer would be saved in mempool structure, and used
>>>>>> when the mempool is populated (calling mempool_ops_alloc).
>>>>>> The type of the structure pointed by the opaque has to be defined
>>>>>> (and documented) into each mempool_ops manager.
>>>>>>   
>>>>> Yes, that was another option, which has the additional impact of
>>>>> needing an
>>>>> opaque added to the mempool struct. If we use the opaque from the
>>>>> _populate_
>>>>> function, we use it straight away in the alloc, no storage needed.
>>>>>
>>>>> Also, do you think we need to go ahead with this change, or can we add
>>>>> it later as an
>>>>> improvement?
>>>> The opaque in populate_phys() is already used for something else
>>>> (i.e. the argument for the free callback of the memory chunk).
>>>> I'm afraid it could cause confusion to have it used for 2 different
>>>> things.
>>>>
>>>> About the change, I think it could be good to have it in 16.11,
>>>> because it will probably change the API, and we should avoid
>>>> changing it each version ;)
>>>>
>>>> So I'd vote to have it in the patchset for consistency.
>>> Sure, we should avoid breaking API just after we created it. :)
>>>
>>> OK, here's a slightly different proposal.
>>>
>>> If we add a string to _ops_byname, yes, that will work for Jan's case.
> A string? No, I needed to pass a file descriptor or a pointer to some rte_device
> or something like that. So a void * is a way to go.

Apologies, I misread. *opaque it is.

>>> However, if we add a void*, that allows us the flexibility of passing
>>> anything we
>>> want. We can then store the void* in the mempool struct as void
>>> *pool_config,
> void *ops_context, ops_args, ops_data, ...

I think I'll go with ops_args.

>>> so that when the alloc gets called, it can access whatever is stored at
>>> *pool_config
>>> and do the correct initialisation/allocation. In Jan's use case, this
>>> can simply be typecast
>>> to a string. In future cases, it can be a struct, which could include
>>> new flags.
> New flags? Does it mean an API extension?

No, I mean new flags within the opaque data, hidden from the mempool
manager.

>    
>> Yep, agree. But not sure I'm seeing the difference with what I
>> proposed.
> Me neither... I think it is exactly the same :).
>
> Jan

Yes, misread on my part.

>>> I think that change is minimal enough to be low risk at this stage.
>>>
>>> Thoughts?
>> Agree. Thanks!
>>
>>
>> Olivier
>
>

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v12 0/3] mempool: add external mempool manager
  2016-06-15 12:03                           ` Olivier MATZ
  2016-06-15 12:38                             ` Hunt, David
@ 2016-06-15 16:34                             ` Hunt, David
  2016-06-15 16:40                               ` Olivier MATZ
  1 sibling, 1 reply; 238+ messages in thread
From: Hunt, David @ 2016-06-15 16:34 UTC (permalink / raw)
  To: Olivier MATZ, Jan Viktorin; +Cc: dev, jerin.jacob, shreyansh.jain



On 15/6/2016 1:03 PM, Olivier MATZ wrote:
> [...]
>
> The opaque pointer would be saved in mempool structure, and used
> when the mempool is populated (calling mempool_ops_alloc).
> The type of the structure pointed by the opaque has to be defined
> (and documented) into each mempool_ops manager.
>
>
> Olivier


OK, just to be sure before I post another patchset...

For the rte_mempool struct:
         struct rte_mempool_memhdr_list mem_list; /**< List of memory chunks */
+       void *ops_args;                  /**< optional args for ops alloc. */

(at the end of the struct, as it's just on the control path, so as not
to affect the fast path)

Then change the function params:
  int
-rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name);
+rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
+               void *ops_args);

And (almost) finally, in the rte_mempool_set_ops_byname function:
         mp->ops_index = i;
+       mp->ops_args = ops_args;
         return 0;

Then (actually) finally, add a NULL to all the calls to
rte_mempool_set_ops_byname.

OK? :)

Regards,
Dave.
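
[Editor's note: a toy sketch of the change above applied end to end. The struct layout and ops-table lookup are invented for illustration, not the real DPDK code; "ring_mp_mc" stands in for the default ops name.]

```c
#include <stddef.h>

/* Toy rte_mempool carrying the new field from the diff above;
 * the real structure and ops-table lookup are elided. */
struct rte_mempool {
	int ops_index;
	void *ops_args;   /* optional args for the ops alloc callback */
};

/* Extended signature: the opaque is stored alongside the ops index. */
int
rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
			   void *ops_args)
{
	(void)name;            /* real code: look up the named ops, set i */
	mp->ops_index = 0;
	mp->ops_args = ops_args;
	return 0;
}

/* Legacy callers are simply updated to pass NULL for ops_args. */
int
set_default_ops(struct rte_mempool *mp)
{
	return rte_mempool_set_ops_byname(mp, "ring_mp_mc", NULL);
}
```

A handler that needs an argument passes it in the same call that selects the ops, which is the whole point of extending set_ops_byname rather than adding a third function.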

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v12 0/3] mempool: add external mempool manager
  2016-06-15 16:34                             ` Hunt, David
@ 2016-06-15 16:40                               ` Olivier MATZ
  2016-06-16  4:35                                 ` Shreyansh Jain
  2016-06-16  7:47                                 ` Hunt, David
  0 siblings, 2 replies; 238+ messages in thread
From: Olivier MATZ @ 2016-06-15 16:40 UTC (permalink / raw)
  To: Hunt, David, Jan Viktorin; +Cc: dev, jerin.jacob, shreyansh.jain



On 06/15/2016 06:34 PM, Hunt, David wrote:
>
>
> On 15/6/2016 1:03 PM, Olivier MATZ wrote:
>> [...]
>>
>> The opaque pointer would be saved in mempool structure, and used
>> when the mempool is populated (calling mempool_ops_alloc).
>> The type of the structure pointed by the opaque has to be defined
>> (and documented) into each mempool_ops manager.
>>
>>
>> Olivier
>
>
> OK, just to be sure before I post another patchset.....
>
> For the rte_mempool_struct:
>          struct rte_mempool_memhdr_list mem_list; /**< List of memory
> chunks */
> +       void *ops_args;                  /**< optional args for ops
> alloc. */
>
> (at the end of the struct, as it's just on the control path, not to
> affect fast path)

Hmm, I would put it just after pool_data.


>
> Then change function params:
>   int
> -rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name);
> +rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
> +               void *ops_args);
>
> And (almost) finally in the rte_mempool_set_ops_byname function:
>          mp->ops_index = i;
> +       mp->ops_args = ops_args;
>          return 0;
>
> Then (actually) finally, add a null to all the calls to
> rte_mempool_set_ops_byname.
>
> OK? :)
>

Else, looks good to me! Thanks David.

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v12 0/3] mempool: add external mempool manager
  2016-06-15 16:40                               ` Olivier MATZ
@ 2016-06-16  4:35                                 ` Shreyansh Jain
  2016-06-16  7:04                                   ` Hunt, David
  2016-06-16  7:47                                 ` Hunt, David
  1 sibling, 1 reply; 238+ messages in thread
From: Shreyansh Jain @ 2016-06-16  4:35 UTC (permalink / raw)
  To: Olivier MATZ, Hunt, David, Jan Viktorin; +Cc: dev, jerin.jacob

Though I am late to the discussion...

> -----Original Message-----
> From: Olivier MATZ [mailto:olivier.matz@6wind.com]
> Sent: Wednesday, June 15, 2016 10:10 PM
> To: Hunt, David <david.hunt@intel.com>; Jan Viktorin
> <viktorin@rehivetech.com>
> Cc: dev@dpdk.org; jerin.jacob@caviumnetworks.com; Shreyansh Jain
> <shreyansh.jain@nxp.com>
> Subject: Re: [PATCH v12 0/3] mempool: add external mempool manager
> 
> 
> 
> On 06/15/2016 06:34 PM, Hunt, David wrote:
> >
> >
> > On 15/6/2016 1:03 PM, Olivier MATZ wrote:
> >> [...]
> >>
> >> The opaque pointer would be saved in mempool structure, and used
> >> when the mempool is populated (calling mempool_ops_alloc).
> >> The type of the structure pointed by the opaque has to be defined
> >> (and documented) into each mempool_ops manager.
> >>
> >>
> >> Olivier
> >
> >
> > OK, just to be sure before I post another patchset.....
> >
> > For the rte_mempool_struct:
> >          struct rte_mempool_memhdr_list mem_list; /**< List of memory
> > chunks */
> > +       void *ops_args;                  /**< optional args for ops
> > alloc. */
> >
> > (at the end of the struct, as it's just on the control path, not to
> > affect fast path)
> 
> Hmm, I would put it just after pool_data.

+1
And would 'pool_config' (picked from a previous email from David) be a
better name?

From a user perspective, the application is passing a configuration item
for the pool to work with. Only the application and the mempool allocator
understand it (it's opaque). 'ops_arg', on the other hand, suggests
something that controls the assignment of operations to the framework.

Maybe just my point of view.

> 
> 
> >
> > Then change function params:
> >   int
> > -rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name);
> > +rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
> > +               void *ops_args);
> >
> > And (almost) finally in the rte_mempool_set_ops_byname function:
> >          mp->ops_index = i;
> > +       mp->ops_args = ops_args;
> >          return 0;
> >
> > Then (actually) finally, add a null to all the calls to
> > rte_mempool_set_ops_byname.
> >
> > OK? :)
> >
> 
> Else, looks good to me! Thanks David.

Me too. Though I would like to clarify something for my understanding:

mempool->pool_data => used by the allocator to store its private data
mempool->pool_config => (or ops_arg) used by the allocator to access the
user/app-provided value

Is that correct?

-
Shreyansh

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v12 0/3] mempool: add external mempool manager
  2016-06-16  4:35                                 ` Shreyansh Jain
@ 2016-06-16  7:04                                   ` Hunt, David
  0 siblings, 0 replies; 238+ messages in thread
From: Hunt, David @ 2016-06-16  7:04 UTC (permalink / raw)
  To: Shreyansh Jain, Olivier MATZ, Jan Viktorin; +Cc: dev, jerin.jacob

Hi Shreyansh,

On 16/6/2016 5:35 AM, Shreyansh Jain wrote:
> Though I am late to the discussion...
>
>> -----Original Message-----
>> From: Olivier MATZ [mailto:olivier.matz@6wind.com]
>> Sent: Wednesday, June 15, 2016 10:10 PM
>> To: Hunt, David <david.hunt@intel.com>; Jan Viktorin
>> <viktorin@rehivetech.com>
>> Cc: dev@dpdk.org; jerin.jacob@caviumnetworks.com; Shreyansh Jain
>> <shreyansh.jain@nxp.com>
>> Subject: Re: [PATCH v12 0/3] mempool: add external mempool manager
>>
>>
>>
>> On 06/15/2016 06:34 PM, Hunt, David wrote:
>>>
>>> On 15/6/2016 1:03 PM, Olivier MATZ wrote:
>>>> [...]
>>>>
>>>> The opaque pointer would be saved in mempool structure, and used
>>>> when the mempool is populated (calling mempool_ops_alloc).
>>>> The type of the structure pointed by the opaque has to be defined
>>>> (and documented) into each mempool_ops manager.
>>>>
>>>>
>>>> Olivier
>>>
>>> OK, just to be sure before I post another patchset.....
>>>
>>> For the rte_mempool_struct:
>>>           struct rte_mempool_memhdr_list mem_list; /**< List of memory
>>> chunks */
>>> +       void *ops_args;                  /**< optional args for ops
>>> alloc. */
>>>
>>> (at the end of the struct, as it's just on the control path, not to
>>> affect fast path)
>> Hmm, I would put it just after pool_data.
> +1
> And, would 'pool_config' (picked from a previous email from David) a better name?
>
>  From a user perspective, the application is passing a configuration item to the pool to work one. Only the application and mempool allocator understand it (opaque).
> As for 'ops_arg', it would be to control 'assignment-of-operations' to the framework.
>
> Maybe just my point of view.

I agree. I was originally happy with pool_config, which sits well with
pool_data. And it's data for configuring the pool during allocation.
I'll go with that, then.


>>> Then change function params:
>>>    int
>>> -rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name);
>>> +rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
>>> +               void *ops_args);
>>>
>>> And (almost) finally in the rte_mempool_set_ops_byname function:
>>>           mp->ops_index = i;
>>> +       mp->ops_args = ops_args;
>>>           return 0;
>>>
>>> Then (actually) finally, add a null to all the calls to
>>> rte_mempool_set_ops_byname.
>>>
>>> OK? :)
>>>
>> Else, looks good to me! Thanks David.
> Me too. Though I would like to clarify something for my understanding:
>
> Mempool->pool_data => Used by allocator to store private data
> Mempool->pool_config => (or ops_arg) used by allocator to access user/app provided value.
>
> Is that correct?

Yes, that's correct.

Regards,
David.
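
[Editor's note: a toy illustration of the split confirmed above. The allocator, its state struct, and the config type are invented for the example; only the roles of pool_config (app-provided, read by alloc) and pool_data (allocator-private, written by alloc) follow the discussion.]

```c
#include <stdlib.h>

/* pool_config carries the app-supplied opaque into alloc();
 * pool_data is where the allocator keeps its own private state. */
struct rte_mempool {
	void *pool_data;    /* allocator-private state, set by alloc() */
	void *pool_config;  /* app-provided value, set before populate */
};

struct my_state { int ring_size; };   /* invented allocator state */

int
my_alloc(struct rte_mempool *mp)
{
	const int *cfg = mp->pool_config;           /* read app config */
	struct my_state *st = malloc(sizeof(*st));  /* private state */

	if (st == NULL)
		return -1;
	st->ring_size = (cfg != NULL) ? *cfg : 0;
	mp->pool_data = st;                         /* stash for later ops */
	return 0;
}
```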

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v12 0/3] mempool: add external mempool manager
  2016-06-15 16:40                               ` Olivier MATZ
  2016-06-16  4:35                                 ` Shreyansh Jain
@ 2016-06-16  7:47                                 ` Hunt, David
  2016-06-16  8:47                                   ` Olivier MATZ
  1 sibling, 1 reply; 238+ messages in thread
From: Hunt, David @ 2016-06-16  7:47 UTC (permalink / raw)
  To: Olivier MATZ, Jan Viktorin; +Cc: dev, jerin.jacob, shreyansh.jain



On 15/6/2016 5:40 PM, Olivier MATZ wrote:
>
>
> On 06/15/2016 06:34 PM, Hunt, David wrote:
>>
>>
>> On 15/6/2016 1:03 PM, Olivier MATZ wrote:
>>> [...]
>>>
>>> The opaque pointer would be saved in mempool structure, and used
>>> when the mempool is populated (calling mempool_ops_alloc).
>>> The type of the structure pointed by the opaque has to be defined
>>> (and documented) into each mempool_ops manager.
>>>
>>>
>>> Olivier
>>
>>
>> OK, just to be sure before I post another patchset.....
>>
>> For the rte_mempool_struct:
>>          struct rte_mempool_memhdr_list mem_list; /**< List of memory
>> chunks */
>> +       void *ops_args;                  /**< optional args for ops
>> alloc. */
>>
>> (at the end of the struct, as it's just on the control path, not to
>> affect fast path)
>
> Hmm, I would put it just after pool_data.
>

When I move it to just after pool_data, the performance of
mempool_perf_autotest drops by 2% on my machine for the local cache
tests. I think I should leave it where I suggested.

Regards,
David.

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v12 0/3] mempool: add external mempool manager
  2016-06-16  7:47                                 ` Hunt, David
@ 2016-06-16  8:47                                   ` Olivier MATZ
  2016-06-16  8:55                                     ` Hunt, David
  0 siblings, 1 reply; 238+ messages in thread
From: Olivier MATZ @ 2016-06-16  8:47 UTC (permalink / raw)
  To: Hunt, David, Jan Viktorin; +Cc: dev, jerin.jacob, shreyansh.jain



On 06/16/2016 09:47 AM, Hunt, David wrote:
>
>
> On 15/6/2016 5:40 PM, Olivier MATZ wrote:
>>
>>
>> On 06/15/2016 06:34 PM, Hunt, David wrote:
>>>
>>>
>>> On 15/6/2016 1:03 PM, Olivier MATZ wrote:
>>>> [...]
>>>>
>>>> The opaque pointer would be saved in mempool structure, and used
>>>> when the mempool is populated (calling mempool_ops_alloc).
>>>> The type of the structure pointed by the opaque has to be defined
>>>> (and documented) into each mempool_ops manager.
>>>>
>>>>
>>>> Olivier
>>>
>>>
>>> OK, just to be sure before I post another patchset.....
>>>
>>> For the rte_mempool_struct:
>>>          struct rte_mempool_memhdr_list mem_list; /**< List of memory
>>> chunks */
>>> +       void *ops_args;                  /**< optional args for ops
>>> alloc. */
>>>
>>> (at the end of the struct, as it's just on the control path, not to
>>> affect fast path)
>>
>> Hmm, I would put it just after pool_data.
>>
>
> When I move it to just after pool data, the performance of the
> mempool_perf_autotest drops by 2% on my machine for the local cache tests.
> I think I should leave it where I suggested.

I don't really see what you call control path and data path here.
As far as I can see, none of the fields in the mempool structure are
modified once the mempool is initialized.

http://dpdk.org/browse/dpdk/tree/lib/librte_mempool/rte_mempool.h?id=ce94a51ff05c0a4b63177f8a314feb5d19992036#n201

So I don't see why there would be more cache misses whether it's
placed at the beginning or at the end. Maybe I'm missing something...

I still believe it's better to group the 2 fields as they are
tightly linked together. It could be at the end if you see better
performance.

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v12 0/3] mempool: add external mempool manager
  2016-06-16  8:47                                   ` Olivier MATZ
@ 2016-06-16  8:55                                     ` Hunt, David
  2016-06-16  8:58                                       ` Olivier MATZ
  0 siblings, 1 reply; 238+ messages in thread
From: Hunt, David @ 2016-06-16  8:55 UTC (permalink / raw)
  To: Olivier MATZ, Jan Viktorin; +Cc: dev, jerin.jacob, shreyansh.jain



On 16/6/2016 9:47 AM, Olivier MATZ wrote:
>
>
> On 06/16/2016 09:47 AM, Hunt, David wrote:
>>
>>
>> On 15/6/2016 5:40 PM, Olivier MATZ wrote:
>>>
>>>
>>> On 06/15/2016 06:34 PM, Hunt, David wrote:
>>>>
>>>>
>>>> On 15/6/2016 1:03 PM, Olivier MATZ wrote:
>>>>> [...]
>>>>>
>>>>> The opaque pointer would be saved in mempool structure, and used
>>>>> when the mempool is populated (calling mempool_ops_alloc).
>>>>> The type of the structure pointed by the opaque has to be defined
>>>>> (and documented) into each mempool_ops manager.
>>>>>
>>>>>
>>>>> Olivier
>>>>
>>>>
>>>> OK, just to be sure before I post another patchset.....
>>>>
>>>> For the rte_mempool_struct:
>>>>          struct rte_mempool_memhdr_list mem_list; /**< List of memory
>>>> chunks */
>>>> +       void *ops_args;                  /**< optional args for ops
>>>> alloc. */
>>>>
>>>> (at the end of the struct, as it's just on the control path, not to
>>>> affect fast path)
>>>
>>> Hmm, I would put it just after pool_data.
>>>
>>
>> When I move it to just after pool data, the performance of the
>> mempool_perf_autotest drops by 2% on my machine for the local cache 
>> tests.
>> I think I should leave it where I suggested.
>
> I don't really see what you call control path and data path here.
> For me, all the fields in mempool structure are not modified once
> the mempool is initialized.
>
> http://dpdk.org/browse/dpdk/tree/lib/librte_mempool/rte_mempool.h?id=ce94a51ff05c0a4b63177f8a314feb5d19992036#n201 
>
>
> So I don't think we should have more cache misses whether it's
> placed at the beginning or at the end. Maybe I'm missing something...
>
> I still believe it's better to group the 2 fields as they are
> tightly linked together. It could be at the end if you see better
> performance.
>

OK, I'll leave at the end because of the performance hit.

Regards,
David.

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v12 0/3] mempool: add external mempool manager
  2016-06-16  8:55                                     ` Hunt, David
@ 2016-06-16  8:58                                       ` Olivier MATZ
  2016-06-16 11:34                                         ` Hunt, David
  0 siblings, 1 reply; 238+ messages in thread
From: Olivier MATZ @ 2016-06-16  8:58 UTC (permalink / raw)
  To: Hunt, David, Jan Viktorin; +Cc: dev, jerin.jacob, shreyansh.jain

>>
>> So I don't think we should have more cache misses whether it's
>> placed at the beginning or at the end. Maybe I'm missing something...
>>
>> I still believe it's better to group the 2 fields as they are
>> tightly linked together. It could be at the end if you see better
>> performance.
>>
>
> OK, I'll leave at the end because of the performance hit.

Sorry, my message was not clear.
I mean, having both at the end. Do you see a performance
impact in that case?


Regards
Olivier

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v12 0/3] mempool: add external mempool manager
  2016-06-16  8:58                                       ` Olivier MATZ
@ 2016-06-16 11:34                                         ` Hunt, David
  0 siblings, 0 replies; 238+ messages in thread
From: Hunt, David @ 2016-06-16 11:34 UTC (permalink / raw)
  To: Olivier MATZ, Jan Viktorin; +Cc: dev, jerin.jacob, shreyansh.jain



On 16/6/2016 9:58 AM, Olivier MATZ wrote:
>>>
>>> So I don't think we should have more cache misses whether it's
>>> placed at the beginning or at the end. Maybe I'm missing something...
>>>
>>> I still believe it's better to group the 2 fields as they are
>>> tightly linked together. It could be at the end if you see better
>>> performance.
>>>
>>
>> OK, I'll leave at the end because of the performance hit.
>
> Sorry, my message was not clear.
> I mean, having both at the end. Do you see a performance
> impact in that case?
>

I ran several more tests, and the average drop I'm seeing on an older 
server is now down to 1% (local cache use-case), with 0% change on 
a newer Haswell server, so I think at this stage we're safe to put it up 
alongside pool_data. There was also a 0% reduction when I moved both to 
the bottom of the struct. So on the Haswell, placement seems to have 
minimal impact regardless of where the fields go.

I'll post the patch up soon.

Regards,
Dave.

^ permalink raw reply	[flat|nested] 238+ messages in thread

* [PATCH v13 0/3] mempool: add external mempool manager
  2016-06-15  7:47                     ` [PATCH v12 0/3] mempool: add external mempool manager David Hunt
                                         ` (3 preceding siblings ...)
  2016-06-15 10:13                       ` [PATCH v12 0/3] mempool: add external mempool manager Jan Viktorin
@ 2016-06-16 12:30                       ` David Hunt
  2016-06-16 12:30                         ` [PATCH v13 1/3] mempool: support external mempool operations David Hunt
                                           ` (3 more replies)
  4 siblings, 4 replies; 238+ messages in thread
From: David Hunt @ 2016-06-16 12:30 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, viktorin, jerin.jacob, shreyansh.jain

Here's the latest version of the External Mempool Manager patchset.
It's rebased on top of the latest head as of 15/6/2016, including
Olivier's 35-part patch series on mempool re-org [1]

[1] http://dpdk.org/ml/archives/dev/2016-May/039229.html

v13 changes:

 * Added extra opaque data (pool_config) to the mempool struct for mempool
   configuration by the ops functions. For example, this can be used to pass
   device names or device flags to the underlying alloc function.
 * Added mempool_config param to rte_mempool_set_ops_byname()

v12 changes:

 * Fixed a comment (function param h -> ops)
 * fixed a typo (callbacki)

v11 changes:

 * Fixed comments (added '.' where needed for consistency)
 * removed ABI breakage notice for mempool manager in deprecation.rst
 * Added description of the external mempool manager functionality to
   doc/guides/prog_guide/mempool_lib.rst (John Mc reviewed)
 * renamed rte_mempool_default.c to rte_mempool_ring.c

v10 changes:

 * changed the _put/_get op names to _enqueue/_dequeue to be consistent
   with the function names
 * some rte_errno cleanup
 * comment tweaks about when to set pool_data
 * removed an un-needed check for ops->alloc == NULL

v9 changes:

 * added a check for NULL alloc in rte_mempool_ops_register
 * rte_mempool_alloc_t now returns int instead of void*
 * fixed some comment typo's
 * removed some unneeded typecasts
 * changed a return NULL to return -EEXIST in rte_mempool_ops_register
 * fixed rte_mempool_version.map file so builds ok as shared libs
 * moved flags check from rte_mempool_create_empty to rte_mempool_create

v8 changes:

 * merged first three patches in the series into one.
 * changed parameters to ops callback to all be rte_mempool pointer
   rather than pointer to opaque data or uint64.
 * comment fixes.
 * fixed parameter to _free function (was inconsistent).
 * changed MEMPOOL_F_RING_CREATED to MEMPOOL_F_POOL_CREATED

v7 changes:

 * Changed rte_mempool_handler_table to rte_mempool_ops_table
 * Changed hander_idx to ops_index in rte_mempool struct
 * Reworked comments in rte_mempool.h around ops functions
 * Changed rte_mempool_hander.c to rte_mempool_ops.c
 * Changed all functions containing _handler_ to _ops_
 * Now there is no mention of 'handler' left
 * Other small changes out of review of mailing list

v6 changes:

 * Moved the flags handling from rte_mempool_create_empty to
   rte_mempool_create, as it's only there for backward compatibility
 * Various comment additions and cleanup
 * Renamed rte_mempool_handler to rte_mempool_ops
 * Added a union for *pool and u64 pool_id in struct rte_mempool
 * split the original patch into a few parts for easier review.
 * rename functions with _ext_ to _ops_.
 * addressed review comments
 * renamed put and get functions to enqueue and dequeue
 * changed occurrences of rte_mempool_ops to const, as they
   contain function pointers (security)
 * split out the default external mempool handler into a separate
   patch for easier review

v5 changes:
 * rebasing, as it is dependent on another patch series [1]

v4 changes (Olivier Matz):
 * remove the rte_mempool_create_ext() function. To change the handler, the
   user has to do the following:
   - mp = rte_mempool_create_empty()
   - rte_mempool_set_handler(mp, "my_handler")
   - rte_mempool_populate_default(mp)
   This avoids adding another function with more than 10 arguments and
   duplicating the doxygen comments
 * change the api of rte_mempool_alloc_t: only the mempool pointer is required
   as all information is available in it
 * change the api of rte_mempool_free_t: remove return value
 * move inline wrapper functions from the .c to the .h (else they won't be
   inlined). This implies having one header file (rte_mempool.h); otherwise it
   would have generated cross-dependency issues.
 * remove now unused MEMPOOL_F_INT_HANDLER (note: it was misused anyway due
   to the use of && instead of &)
 * fix build in debug mode (__MEMPOOL_STAT_ADD(mp, put_pool, n) remaining)
 * fix build with shared libraries (global handler has to be declared in
   the .map file)
 * rationalize #include order
 * remove unused function rte_mempool_get_handler_name()
 * rename some structures, fields, functions
 * remove the static in front of rte_tailq_elem rte_mempool_tailq (comment
   from Yuanhan)
 * test the ext mempool handler in the same file as the standard mempool tests,
   avoiding code duplication
 * rework the custom handler in mempool_test
 * rework a bit the patch selecting default mbuf pool handler
 * fix some doxygen comments

v3 changes:
 * simplified the file layout, renamed to rte_mempool_handler.[hc]
 * moved the default handlers into rte_mempool_default.c
 * moved the example handler out into app/test/test_ext_mempool.c
 * removed is_mc/is_mp change, slight perf degradation on sp cached operation
 * removed stack handler, may re-introduce at a later date
 * Changes out of code reviews

v2 changes:
 * There was a lot of duplicate code between rte_mempool_xmem_create and
   rte_mempool_create_ext. This has been refactored and is now
   hopefully cleaner.
 * The RTE_NEXT_ABI define is now used to allow building of the library
   in a format that is compatible with binaries built against previous
   versions of DPDK.
 * Changes out of code reviews. Hopefully I've got most of them included.

The External Mempool Manager is an extension to the mempool API that allows
users to add and use an external mempool manager, which allows external memory
subsystems such as external hardware memory management systems and software
based memory allocators to be used with DPDK.

The existing API to the internal DPDK mempool manager will remain unchanged
and will be backward compatible. However, there will be an ABI breakage, as
the mempool struct is changing. These changes are all contained within
RTE_NEXT_ABI defs, and the current or next code can be selected with
the CONFIG_RTE_NEXT_ABI config setting.

There are two aspects to the external mempool manager.
  1. Adding the code for your new mempool operations (ops). This is
     achieved by adding a new mempool ops source file into the
     librte_mempool library, and using the REGISTER_MEMPOOL_OPS macro.
  2. Using the new API to call rte_mempool_create_empty and
     rte_mempool_set_ops_byname to create a new mempool
     using the name parameter to identify which ops to use.

New API calls added
 1. A new rte_mempool_create_empty() function
 2. rte_mempool_set_ops_byname() which sets the mempool's ops (functions)
 3. The rte_mempool_populate_default() and rte_mempool_populate_anon()
    functions, which populate the mempool using the relevant ops

Several external mempool managers may be used in the same application. A new
mempool can then be created by using the new rte_mempool_create_empty function,
then calling rte_mempool_set_ops_byname to point the mempool to the relevant
mempool manager callback structure.

Legacy applications will continue to use the old rte_mempool_create API call,
which uses a ring based mempool manager by default. These applications
will need to be modified to use a new external mempool manager.

The external mempool manager needs to provide the following functions.
 1. alloc     - allocates the mempool memory, and adds each object onto a ring
 2. enqueue   - puts an object back into the mempool once an application has
                finished with it
 3. dequeue   - gets an object from the mempool for use by the application
 4. get_count - gets the number of available objects in the mempool
 5. free      - frees the mempool memory

Every time an enqueue/dequeue/get_count is called from the application/PMD,
the callback for that mempool is called. These functions are in the fastpath,
and any unoptimised ops may limit performance.

The new APIs are as follows:

1. rte_mempool_create_empty

struct rte_mempool *
rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
    unsigned cache_size, unsigned private_data_size,
    int socket_id, unsigned flags);

2. rte_mempool_set_ops_byname()

int
rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
    void *pool_config);

3. rte_mempool_populate_default()

int rte_mempool_populate_default(struct rte_mempool *mp);

4. rte_mempool_populate_anon()

int rte_mempool_populate_anon(struct rte_mempool *mp);

Please see rte_mempool.h for further information on the parameters.


The important thing to note is that the mempool ops struct is passed by name
to rte_mempool_set_ops_byname, which looks through the ops struct array to
get the ops_index, which is then stored in the rte_mempool structure. This
allows multiple processes to use the same mempool, as the function pointers
are accessed via the ops index.

The mempool ops structure contains callbacks to the implementation of
the ops function, and is set up for registration as follows:

static const struct rte_mempool_ops ops_sp_mc = {
    .name = "ring_sp_mc",
    .alloc = rte_mempool_common_ring_alloc,
    .enqueue = common_ring_sp_enqueue,
    .dequeue = common_ring_mc_dequeue,
    .get_count = common_ring_get_count,
    .free = common_ring_free,
};

And then the following macro will register the ops in the array of ops
structures

REGISTER_MEMPOOL_OPS(ops_sp_mc);

For an example of API usage, please see app/test/test_mempool.c, which
implements a rudimentary "custom_handler" mempool manager using simple mallocs
for each mempool object. This file also contains the callbacks and self
registration for the new handler.

David Hunt (2):
  mempool: support external mempool operations
  mbuf: make default mempool ops configurable at build

Olivier Matz (1):
  app/test: test external mempool handler

^ permalink raw reply	[flat|nested] 238+ messages in thread

* [PATCH v13 1/3] mempool: support external mempool operations
  2016-06-16 12:30                       ` [PATCH v13 " David Hunt
@ 2016-06-16 12:30                         ` David Hunt
  2016-06-17  6:58                           ` Hunt, David
  2016-06-17 10:18                           ` Olivier Matz
  2016-06-16 12:30                         ` [PATCH v13 2/3] app/test: test external mempool manager David Hunt
                                           ` (2 subsequent siblings)
  3 siblings, 2 replies; 238+ messages in thread
From: David Hunt @ 2016-06-16 12:30 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, viktorin, jerin.jacob, shreyansh.jain, David Hunt

Until now, the objects stored in a mempool were internally stored in a
ring. This patch introduces the possibility to register external handlers
replacing the ring.

The default behavior remains unchanged, but calling the new function
rte_mempool_set_ops_byname() right after rte_mempool_create_empty() allows
the user to change the handler that will be used when populating
the mempool.

This patch also adds a set of default ops (function callbacks) based
on rte_ring.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
 app/test/test_mempool_perf.c               |   1 -
 doc/guides/prog_guide/mempool_lib.rst      |  31 +++-
 doc/guides/rel_notes/deprecation.rst       |   9 -
 lib/librte_mempool/Makefile                |   2 +
 lib/librte_mempool/rte_mempool.c           |  66 +++-----
 lib/librte_mempool/rte_mempool.h           | 253 ++++++++++++++++++++++++++---
 lib/librte_mempool/rte_mempool_ops.c       | 150 +++++++++++++++++
 lib/librte_mempool/rte_mempool_ring.c      | 161 ++++++++++++++++++
 lib/librte_mempool/rte_mempool_version.map |  13 +-
 9 files changed, 605 insertions(+), 81 deletions(-)
 create mode 100644 lib/librte_mempool/rte_mempool_ops.c
 create mode 100644 lib/librte_mempool/rte_mempool_ring.c

diff --git a/app/test/test_mempool_perf.c b/app/test/test_mempool_perf.c
index c5e3576..c5f8455 100644
--- a/app/test/test_mempool_perf.c
+++ b/app/test/test_mempool_perf.c
@@ -161,7 +161,6 @@ per_lcore_mempool_test(__attribute__((unused)) void *arg)
 							   n_get_bulk);
 				if (unlikely(ret < 0)) {
 					rte_mempool_dump(stdout, mp);
-					rte_ring_dump(stdout, mp->ring);
 					/* in this case, objects are lost... */
 					return -1;
 				}
diff --git a/doc/guides/prog_guide/mempool_lib.rst b/doc/guides/prog_guide/mempool_lib.rst
index c3afc2e..2e3116e 100644
--- a/doc/guides/prog_guide/mempool_lib.rst
+++ b/doc/guides/prog_guide/mempool_lib.rst
@@ -34,7 +34,7 @@ Mempool Library
 ===============
 
 A memory pool is an allocator of a fixed-sized object.
-In the DPDK, it is identified by name and uses a ring to store free objects.
+In the DPDK, it is identified by name and uses a ring or an external mempool manager to store free objects.
 It provides some other optional services such as a per-core object cache and
 an alignment helper to ensure that objects are padded to spread them equally on all DRAM or DDR3 channels.
 
@@ -127,6 +127,35 @@ The maximum size of the cache is static and is defined at compilation time (CONF
    A mempool in Memory with its Associated Ring
 
 
+External Mempool Manager
+------------------------
+
+This allows external memory subsystems, such as external hardware memory
+management systems and software based memory allocators, to be used with DPDK.
+
+There are two aspects to external mempool manager.
+
+* Adding the code for your new mempool operations (ops). This is achieved by
+  adding a new mempool ops code, and using the ``REGISTER_MEMPOOL_OPS`` macro.
+
+* Using the new API to call ``rte_mempool_create_empty()`` and
+  ``rte_mempool_set_ops_byname()`` to create a new mempool and specifying which
+  ops to use.
+
+Several external mempool managers may be used in the same application. A new
+mempool can be created by using the ``rte_mempool_create_empty()`` function,
+then using ``rte_mempool_set_ops_byname()`` to point the mempool to the
+relevant mempool manager callback (ops) structure.
+
+Legacy applications may continue to use the old ``rte_mempool_create()`` API
+call, which uses a ring based mempool manager by default. These applications
+will need to be modified to use a new external mempool manager.
+
+For applications that use ``rte_pktmbuf_pool_create()``, there is a config setting
+(``RTE_MBUF_DEFAULT_MEMPOOL_OPS``) that allows the application to make use of
+an external mempool manager.
+
+
 Use Cases
 ---------
 
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 7d947ae..c415095 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -39,15 +39,6 @@ Deprecation Notices
   compact API. The ones that remain are backwards compatible and use the
   per-lcore default cache if available. This change targets release 16.07.
 
-* The rte_mempool struct will be changed in 16.07 to facilitate the new
-  external mempool manager functionality.
-  The ring element will be replaced with a more generic 'pool' opaque pointer
-  to allow new mempool handlers to use their own user-defined mempool
-  layout. Also newly added to rte_mempool is a handler index.
-  The existing API will be backward compatible, but there will be new API
-  functions added to facilitate the creation of mempools using an external
-  handler. The 16.07 release will contain these changes.
-
 * A librte_vhost public structures refactor is planned for DPDK 16.07
   that requires both ABI and API change.
   The proposed refactor would expose DPDK vhost dev to applications as
diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
index 43423e0..a4c089e 100644
--- a/lib/librte_mempool/Makefile
+++ b/lib/librte_mempool/Makefile
@@ -42,6 +42,8 @@ LIBABIVER := 2
 
 # all source are stored in SRCS-y
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool.c
+SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_ops.c
+SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_ring.c
 # install includes
 SYMLINK-$(CONFIG_RTE_LIBRTE_MEMPOOL)-include := rte_mempool.h
 
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 22a5645..0fb84ad 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -148,7 +148,7 @@ mempool_add_elem(struct rte_mempool *mp, void *obj, phys_addr_t physaddr)
 #endif
 
 	/* enqueue in ring */
-	rte_ring_sp_enqueue(mp->ring, obj);
+	rte_mempool_ops_enqueue_bulk(mp, &obj, 1);
 }
 
 /* call obj_cb() for each mempool element */
@@ -303,40 +303,6 @@ rte_mempool_xmem_usage(__rte_unused void *vaddr, uint32_t elt_num,
 	return (size_t)paddr_idx << pg_shift;
 }
 
-/* create the internal ring */
-static int
-rte_mempool_ring_create(struct rte_mempool *mp)
-{
-	int rg_flags = 0, ret;
-	char rg_name[RTE_RING_NAMESIZE];
-	struct rte_ring *r;
-
-	ret = snprintf(rg_name, sizeof(rg_name),
-		RTE_MEMPOOL_MZ_FORMAT, mp->name);
-	if (ret < 0 || ret >= (int)sizeof(rg_name))
-		return -ENAMETOOLONG;
-
-	/* ring flags */
-	if (mp->flags & MEMPOOL_F_SP_PUT)
-		rg_flags |= RING_F_SP_ENQ;
-	if (mp->flags & MEMPOOL_F_SC_GET)
-		rg_flags |= RING_F_SC_DEQ;
-
-	/* Allocate the ring that will be used to store objects.
-	 * Ring functions will return appropriate errors if we are
-	 * running as a secondary process etc., so no checks made
-	 * in this function for that condition.
-	 */
-	r = rte_ring_create(rg_name, rte_align32pow2(mp->size + 1),
-		mp->socket_id, rg_flags);
-	if (r == NULL)
-		return -rte_errno;
-
-	mp->ring = r;
-	mp->flags |= MEMPOOL_F_RING_CREATED;
-	return 0;
-}
-
 /* free a memchunk allocated with rte_memzone_reserve() */
 static void
 rte_mempool_memchunk_mz_free(__rte_unused struct rte_mempool_memhdr *memhdr,
@@ -354,7 +320,7 @@ rte_mempool_free_memchunks(struct rte_mempool *mp)
 	void *elt;
 
 	while (!STAILQ_EMPTY(&mp->elt_list)) {
-		rte_ring_sc_dequeue(mp->ring, &elt);
+		rte_mempool_ops_dequeue_bulk(mp, &elt, 1);
 		(void)elt;
 		STAILQ_REMOVE_HEAD(&mp->elt_list, next);
 		mp->populated_size--;
@@ -386,9 +352,9 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
 	int ret;
 
 	/* create the internal ring if not already done */
-	if ((mp->flags & MEMPOOL_F_RING_CREATED) == 0) {
-		ret = rte_mempool_ring_create(mp);
-		if (ret < 0)
+	if ((mp->flags & MEMPOOL_F_POOL_CREATED) == 0) {
+		ret = rte_mempool_ops_alloc(mp);
+		if (ret != 0)
 			return ret;
 	}
 
@@ -703,7 +669,7 @@ rte_mempool_free(struct rte_mempool *mp)
 	rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
 
 	rte_mempool_free_memchunks(mp);
-	rte_ring_free(mp->ring);
+	rte_mempool_ops_free(mp);
 	rte_memzone_free(mp->mz);
 }
 
@@ -815,6 +781,7 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
 		RTE_PTR_ADD(mp, MEMPOOL_HEADER_SIZE(mp, 0));
 
 	te->data = mp;
+
 	rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
 	TAILQ_INSERT_TAIL(mempool_list, te, next);
 	rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
@@ -844,6 +811,19 @@ rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
 	if (mp == NULL)
 		return NULL;
 
+	/*
+	 * Since we have 4 combinations of SP/SC/MP/MC, examine the flags to
+	 * set the correct index into the table of ops structs.
+	 */
+	if ((flags & MEMPOOL_F_SP_PUT) && (flags & MEMPOOL_F_SC_GET))
+		rte_mempool_set_ops_byname(mp, "ring_sp_sc", NULL);
+	else if (flags & MEMPOOL_F_SP_PUT)
+		rte_mempool_set_ops_byname(mp, "ring_sp_mc", NULL);
+	else if (flags & MEMPOOL_F_SC_GET)
+		rte_mempool_set_ops_byname(mp, "ring_mp_sc", NULL);
+	else
+		rte_mempool_set_ops_byname(mp, "ring_mp_mc", NULL);
+
 	/* call the mempool priv initializer */
 	if (mp_init)
 		mp_init(mp, mp_init_arg);
@@ -930,7 +910,7 @@ rte_mempool_count(const struct rte_mempool *mp)
 	unsigned count;
 	unsigned lcore_id;
 
-	count = rte_ring_count(mp->ring);
+	count = rte_mempool_ops_get_count(mp);
 
 	if (mp->cache_size == 0)
 		return count;
@@ -1119,7 +1099,7 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 
 	fprintf(f, "mempool <%s>@%p\n", mp->name, mp);
 	fprintf(f, "  flags=%x\n", mp->flags);
-	fprintf(f, "  ring=<%s>@%p\n", mp->ring->name, mp->ring);
+	fprintf(f, "  pool=%p\n", mp->pool_data);
 	fprintf(f, "  phys_addr=0x%" PRIx64 "\n", mp->mz->phys_addr);
 	fprintf(f, "  nb_mem_chunks=%u\n", mp->nb_mem_chunks);
 	fprintf(f, "  size=%"PRIu32"\n", mp->size);
@@ -1140,7 +1120,7 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 	}
 
 	cache_count = rte_mempool_dump_cache(f, mp);
-	common_count = rte_ring_count(mp->ring);
+	common_count = rte_mempool_ops_get_count(mp);
 	if ((cache_count + common_count) > mp->size)
 		common_count = mp->size - cache_count;
 	fprintf(f, "  common_pool_count=%u\n", common_count);
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 60339bd..a763fb5 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -67,6 +67,7 @@
 #include <inttypes.h>
 #include <sys/queue.h>
 
+#include <rte_spinlock.h>
 #include <rte_log.h>
 #include <rte_debug.h>
 #include <rte_lcore.h>
@@ -203,10 +204,14 @@ struct rte_mempool_memhdr {
  */
 struct rte_mempool {
 	char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool. */
-	struct rte_ring *ring;           /**< Ring to store objects. */
-	const struct rte_memzone *mz;    /**< Memzone where pool is allocated */
+	union {
+		void *pool_data;         /**< Ring or pool to store objects. */
+		uint64_t pool_id;        /**< External mempool identifier. */
+	};
+	void *pool_config;               /**< optional args for ops alloc. */
+	const struct rte_memzone *mz;    /**< Memzone where pool is alloc'd. */
 	int flags;                       /**< Flags of the mempool. */
-	int socket_id;                   /**< Socket id passed at mempool creation. */
+	int socket_id;                   /**< Socket id passed at create. */
 	uint32_t size;                   /**< Max size of the mempool. */
 	uint32_t cache_size;             /**< Size of per-lcore local cache. */
 	uint32_t cache_flushthresh;
@@ -217,6 +222,14 @@ struct rte_mempool {
 	uint32_t trailer_size;           /**< Size of trailer (after elt). */
 
 	unsigned private_data_size;      /**< Size of private data. */
+	/**
+	 * Index into rte_mempool_ops_table array of mempool ops
+	 * structs, which contain callback function pointers.
+	 * We're using an index here rather than pointers to the callbacks
+	 * to facilitate any secondary processes that may want to use
+	 * this mempool.
+	 */
+	int32_t ops_index;
 
 	struct rte_mempool_cache *local_cache; /**< Per-lcore local cache */
 
@@ -235,7 +248,7 @@ struct rte_mempool {
 #define MEMPOOL_F_NO_CACHE_ALIGN 0x0002 /**< Do not align objs on cache lines.*/
 #define MEMPOOL_F_SP_PUT         0x0004 /**< Default put is "single-producer".*/
 #define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
-#define MEMPOOL_F_RING_CREATED   0x0010 /**< Internal: ring is created */
+#define MEMPOOL_F_POOL_CREATED   0x0010 /**< Internal: pool is created. */
 #define MEMPOOL_F_NO_PHYS_CONTIG 0x0020 /**< Don't need physically contiguous objs. */
 
 /**
@@ -325,6 +338,213 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp,
 #define __mempool_check_cookies(mp, obj_table_const, n, free) do {} while(0)
 #endif /* RTE_LIBRTE_MEMPOOL_DEBUG */
 
+#define RTE_MEMPOOL_OPS_NAMESIZE 32 /**< Max length of ops struct name. */
+
+/**
+ * Prototype for implementation specific data provisioning function.
+ *
+ * The function should provide the implementation specific memory
+ * for use by the other mempool ops functions in a given mempool ops struct.
+ * E.g. the default ops provides an instance of the rte_ring for this purpose.
+ * For other ops, it will most likely point to a different type of data
+ * structure, and will be transparent to the application programmer.
+ * This function should set mp->pool_data.
+ */
+typedef int (*rte_mempool_alloc_t)(struct rte_mempool *mp);
+
+/**
+ * Free the opaque private data pointed to by mp->pool_data pointer.
+ */
+typedef void (*rte_mempool_free_t)(struct rte_mempool *mp);
+
+/**
+ * Enqueue an object into the external pool.
+ */
+typedef int (*rte_mempool_enqueue_t)(struct rte_mempool *mp,
+		void * const *obj_table, unsigned int n);
+
+/**
+ * Dequeue an object from the external pool.
+ */
+typedef int (*rte_mempool_dequeue_t)(struct rte_mempool *mp,
+		void **obj_table, unsigned int n);
+
+/**
+ * Return the number of available objects in the external pool.
+ */
+typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
+
+/** Structure defining mempool operations structure */
+struct rte_mempool_ops {
+	char name[RTE_MEMPOOL_OPS_NAMESIZE]; /**< Name of mempool ops struct. */
+	rte_mempool_alloc_t alloc;       /**< Allocate private data. */
+	rte_mempool_free_t free;         /**< Free the external pool. */
+	rte_mempool_enqueue_t enqueue;   /**< Enqueue an object. */
+	rte_mempool_dequeue_t dequeue;   /**< Dequeue an object. */
+	rte_mempool_get_count get_count; /**< Get qty of available objs. */
+} __rte_cache_aligned;
+
+#define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */
+
+/**
+ * Structure storing the table of registered ops structs, each of which contain
+ * the function pointers for the mempool ops functions.
+ * Each process has its own storage for this ops struct array so that
+ * the mempools can be shared across primary and secondary processes.
+ * The indices used to access the array are valid across processes, whereas
+ * any function pointers stored directly in the mempool struct would not be.
+ * This results in us simply having "ops_index" in the mempool struct.
+ */
+struct rte_mempool_ops_table {
+	rte_spinlock_t sl;     /**< Spinlock for add/delete. */
+	uint32_t num_ops;      /**< Number of used ops structs in the table. */
+	/**
+	 * Storage for all possible ops structs.
+	 */
+	struct rte_mempool_ops ops[RTE_MEMPOOL_MAX_OPS_IDX];
+} __rte_cache_aligned;
+
+/** Array of registered ops structs. */
+extern struct rte_mempool_ops_table rte_mempool_ops_table;
+
+/**
+ * @internal Get the mempool ops struct from its index.
+ *
+ * @param ops_index
+ *   The index of the ops struct in the ops struct table. It must be a valid
+ *   index: (0 <= idx < num_ops).
+ * @return
+ *   The pointer to the ops struct in the table.
+ */
+static inline struct rte_mempool_ops *
+rte_mempool_ops_get(int ops_index)
+{
+	RTE_VERIFY(ops_index < RTE_MEMPOOL_MAX_OPS_IDX);
+
+	return &rte_mempool_ops_table.ops[ops_index];
+}
+
+/**
+ * @internal Wrapper for mempool_ops alloc callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @return
+ *   - 0: Success; successfully allocated mempool pool_data.
+ *   - <0: Error; code of alloc function.
+ */
+int
+rte_mempool_ops_alloc(struct rte_mempool *mp);
+
+/**
+ * @internal Wrapper for mempool_ops get callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param obj_table
+ *   Pointer to a table of void * pointers (objects).
+ * @param n
+ *   Number of objects to get.
+ * @return
+ *   - 0: Success; got n objects.
+ *   - <0: Error; code of get function.
+ */
+static inline int
+rte_mempool_ops_dequeue_bulk(struct rte_mempool *mp,
+		void **obj_table, unsigned n)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_ops_get(mp->ops_index);
+	return ops->dequeue(mp, obj_table, n);
+}
+
+/**
+ * @internal wrapper for mempool_ops put callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param obj_table
+ *   Pointer to a table of void * pointers (objects).
+ * @param n
+ *   Number of objects to put.
+ * @return
+ *   - 0: Success; n objects supplied.
+ *   - <0: Error; code of put function.
+ */
+static inline int
+rte_mempool_ops_enqueue_bulk(struct rte_mempool *mp, void * const *obj_table,
+		unsigned n)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_ops_get(mp->ops_index);
+	return ops->enqueue(mp, obj_table, n);
+}
+
+/**
+ * @internal wrapper for mempool_ops get_count callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @return
+ *   The number of available objects in the external pool.
+ */
+unsigned
+rte_mempool_ops_get_count(const struct rte_mempool *mp);
+
+/**
+ * @internal wrapper for mempool_ops free callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ */
+void
+rte_mempool_ops_free(struct rte_mempool *mp);
+
+/**
+ * Set the ops of a mempool.
+ *
+ * This can only be done on a mempool that is not populated, i.e. just after
+ * a call to rte_mempool_create_empty().
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param name
+ *   Name of the ops structure to use for this mempool.
+ * @param pool_config
+ *   Opaque data that can be used by the ops functions.
+ * @return
+ *   - 0: Success; the mempool is now using the requested ops functions.
+ *   - -EINVAL - Invalid ops struct name provided.
+ *   - -EEXIST - mempool already has an ops struct assigned.
+ */
+int
+rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
+		void *pool_config);
+
+/**
+ * Register mempool operations.
+ *
+ * @param ops
+ *   Pointer to an ops structure to register.
+ * @return
+ *   - >=0: Success; return the index of the ops struct in the table.
+ *   - -EINVAL - one or more mandatory callbacks are missing from the ops struct.
+ *   - -ENOSPC - the maximum number of ops structs has been reached.
+ */
+int rte_mempool_ops_register(const struct rte_mempool_ops *ops);
+
+/**
+ * Macro to statically register the ops of an external mempool manager.
+ * Note that rte_mempool_ops_register fails silently here when more than
+ * RTE_MEMPOOL_MAX_OPS_IDX ops structs have been registered.
+ */
+#define MEMPOOL_REGISTER_OPS(ops)					\
+	void mp_hdlr_init_##ops(void);					\
+	void __attribute__((constructor, used)) mp_hdlr_init_##ops(void)\
+	{								\
+		rte_mempool_ops_register(&ops);			\
+	}
+
 /**
  * An object callback function for mempool.
  *
@@ -774,7 +994,7 @@ __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
 	cache->len += n;
 
 	if (cache->len >= flushthresh) {
-		rte_ring_mp_enqueue_bulk(mp->ring, &cache->objs[cache_size],
+		rte_mempool_ops_enqueue_bulk(mp, &cache->objs[cache_size],
 				cache->len - cache_size);
 		cache->len = cache_size;
 	}
@@ -785,19 +1005,10 @@ ring_enqueue:
 
 	/* push remaining objects in ring */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-	if (is_mp) {
-		if (rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n) < 0)
-			rte_panic("cannot put objects in mempool\n");
-	}
-	else {
-		if (rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n) < 0)
-			rte_panic("cannot put objects in mempool\n");
-	}
+	if (rte_mempool_ops_enqueue_bulk(mp, obj_table, n) < 0)
+		rte_panic("cannot put objects in mempool\n");
 #else
-	if (is_mp)
-		rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n);
-	else
-		rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n);
+	rte_mempool_ops_enqueue_bulk(mp, obj_table, n);
 #endif
 }
 
@@ -945,7 +1156,8 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
 		uint32_t req = n + (cache_size - cache->len);
 
 		/* How many do we require i.e. number to fill the cache + the request */
-		ret = rte_ring_mc_dequeue_bulk(mp->ring, &cache->objs[cache->len], req);
+		ret = rte_mempool_ops_dequeue_bulk(mp,
+			&cache->objs[cache->len], req);
 		if (unlikely(ret < 0)) {
 			/*
 			 * In the offchance that we are buffer constrained,
@@ -972,10 +1184,7 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
 ring_dequeue:
 
 	/* get remaining objects from ring */
-	if (is_mc)
-		ret = rte_ring_mc_dequeue_bulk(mp->ring, obj_table, n);
-	else
-		ret = rte_ring_sc_dequeue_bulk(mp->ring, obj_table, n);
+	ret = rte_mempool_ops_dequeue_bulk(mp, obj_table, n);
 
 	if (ret < 0)
 		__MEMPOOL_STAT_ADD(mp, get_fail, n);
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
new file mode 100644
index 0000000..7977a14
--- /dev/null
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -0,0 +1,150 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2016 6WIND S.A.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <string.h>
+
+#include <rte_mempool.h>
+#include <rte_errno.h>
+
+/* indirect jump table to support external memory pools. */
+struct rte_mempool_ops_table rte_mempool_ops_table = {
+	.sl =  RTE_SPINLOCK_INITIALIZER,
+	.num_ops = 0
+};
+
+/* add a new ops struct in rte_mempool_ops_table, return its index. */
+int
+rte_mempool_ops_register(const struct rte_mempool_ops *h)
+{
+	struct rte_mempool_ops *ops;
+	int16_t ops_index;
+
+	rte_spinlock_lock(&rte_mempool_ops_table.sl);
+
+	if (rte_mempool_ops_table.num_ops >=
+			RTE_MEMPOOL_MAX_OPS_IDX) {
+		rte_spinlock_unlock(&rte_mempool_ops_table.sl);
+		RTE_LOG(ERR, MEMPOOL,
+			"Maximum number of mempool ops structs exceeded\n");
+		return -ENOSPC;
+	}
+
+	if (h->alloc == NULL || h->enqueue == NULL ||
+			h->dequeue == NULL || h->get_count == NULL) {
+		rte_spinlock_unlock(&rte_mempool_ops_table.sl);
+		RTE_LOG(ERR, MEMPOOL,
+			"Missing callback while registering mempool ops\n");
+		return -EINVAL;
+	}
+
+	if (strlen(h->name) >= sizeof(ops->name) - 1) {
+		rte_spinlock_unlock(&rte_mempool_ops_table.sl);
+		RTE_LOG(DEBUG, EAL, "%s(): mempool_ops <%s>: name too long\n",
+				__func__, h->name);
+		rte_errno = EEXIST;
+		return -EEXIST;
+	}
+
+	ops_index = rte_mempool_ops_table.num_ops++;
+	ops = &rte_mempool_ops_table.ops[ops_index];
+	snprintf(ops->name, sizeof(ops->name), "%s", h->name);
+	ops->alloc = h->alloc;
+	ops->enqueue = h->enqueue;
+	ops->dequeue = h->dequeue;
+	ops->get_count = h->get_count;
+
+	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
+
+	return ops_index;
+}
+
+/* wrapper to allocate an external mempool's private (pool) data. */
+int
+rte_mempool_ops_alloc(struct rte_mempool *mp)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_ops_get(mp->ops_index);
+	return ops->alloc(mp);
+}
+
+/* wrapper to free an external mempool's pool data. */
+void
+rte_mempool_ops_free(struct rte_mempool *mp)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_ops_get(mp->ops_index);
+	if (ops->free == NULL)
+		return;
+	ops->free(mp);
+}
+
+/* wrapper to get available objects in an external mempool. */
+unsigned int
+rte_mempool_ops_get_count(const struct rte_mempool *mp)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_ops_get(mp->ops_index);
+	return ops->get_count(mp);
+}
+
+/* sets mempool ops previously registered by rte_mempool_ops_register. */
+int
+rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
+	void *pool_config)
+{
+	struct rte_mempool_ops *ops = NULL;
+	unsigned i;
+
+	/* too late, the mempool is already populated. */
+	if (mp->flags & MEMPOOL_F_POOL_CREATED)
+		return -EEXIST;
+
+	for (i = 0; i < rte_mempool_ops_table.num_ops; i++) {
+		if (!strcmp(name,
+				rte_mempool_ops_table.ops[i].name)) {
+			ops = &rte_mempool_ops_table.ops[i];
+			break;
+		}
+	}
+
+	if (ops == NULL)
+		return -EINVAL;
+
+	mp->ops_index = i;
+	mp->pool_config = pool_config;
+	return 0;
+}
diff --git a/lib/librte_mempool/rte_mempool_ring.c b/lib/librte_mempool/rte_mempool_ring.c
new file mode 100644
index 0000000..626786e
--- /dev/null
+++ b/lib/librte_mempool/rte_mempool_ring.c
@@ -0,0 +1,161 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <string.h>
+
+#include <rte_errno.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+
+static int
+common_ring_mp_enqueue(struct rte_mempool *mp, void * const *obj_table,
+		unsigned n)
+{
+	return rte_ring_mp_enqueue_bulk(mp->pool_data, obj_table, n);
+}
+
+static int
+common_ring_sp_enqueue(struct rte_mempool *mp, void * const *obj_table,
+		unsigned n)
+{
+	return rte_ring_sp_enqueue_bulk(mp->pool_data, obj_table, n);
+}
+
+static int
+common_ring_mc_dequeue(struct rte_mempool *mp, void **obj_table, unsigned n)
+{
+	return rte_ring_mc_dequeue_bulk(mp->pool_data, obj_table, n);
+}
+
+static int
+common_ring_sc_dequeue(struct rte_mempool *mp, void **obj_table, unsigned n)
+{
+	return rte_ring_sc_dequeue_bulk(mp->pool_data, obj_table, n);
+}
+
+static unsigned
+common_ring_get_count(const struct rte_mempool *mp)
+{
+	return rte_ring_count(mp->pool_data);
+}
+
+
+static int
+common_ring_alloc(struct rte_mempool *mp)
+{
+	int rg_flags = 0, ret;
+	char rg_name[RTE_RING_NAMESIZE];
+	struct rte_ring *r;
+
+	ret = snprintf(rg_name, sizeof(rg_name),
+		RTE_MEMPOOL_MZ_FORMAT, mp->name);
+	if (ret < 0 || ret >= (int)sizeof(rg_name)) {
+		rte_errno = ENAMETOOLONG;
+		return -rte_errno;
+	}
+
+	/* ring flags */
+	if (mp->flags & MEMPOOL_F_SP_PUT)
+		rg_flags |= RING_F_SP_ENQ;
+	if (mp->flags & MEMPOOL_F_SC_GET)
+		rg_flags |= RING_F_SC_DEQ;
+
+	/*
+	 * Allocate the ring that will be used to store objects.
+	 * Ring functions will return appropriate errors if we are
+	 * running as a secondary process etc., so no checks made
+	 * in this function for that condition.
+	 */
+	r = rte_ring_create(rg_name, rte_align32pow2(mp->size + 1),
+		mp->socket_id, rg_flags);
+	if (r == NULL)
+		return -rte_errno;
+
+	mp->pool_data = r;
+
+	return 0;
+}
+
+static void
+common_ring_free(struct rte_mempool *mp)
+{
+	rte_ring_free(mp->pool_data);
+}
+
+/*
+ * The following four mempool ops structs provide the backward-compatible
+ * handlers for single/multi producers and single/multi consumers, as
+ * dictated by the flags provided to the rte_mempool_create function.
+ */
+static const struct rte_mempool_ops ops_mp_mc = {
+	.name = "ring_mp_mc",
+	.alloc = common_ring_alloc,
+	.free = common_ring_free,
+	.enqueue = common_ring_mp_enqueue,
+	.dequeue = common_ring_mc_dequeue,
+	.get_count = common_ring_get_count,
+};
+
+static const struct rte_mempool_ops ops_sp_sc = {
+	.name = "ring_sp_sc",
+	.alloc = common_ring_alloc,
+	.free = common_ring_free,
+	.enqueue = common_ring_sp_enqueue,
+	.dequeue = common_ring_sc_dequeue,
+	.get_count = common_ring_get_count,
+};
+
+static const struct rte_mempool_ops ops_mp_sc = {
+	.name = "ring_mp_sc",
+	.alloc = common_ring_alloc,
+	.free = common_ring_free,
+	.enqueue = common_ring_mp_enqueue,
+	.dequeue = common_ring_sc_dequeue,
+	.get_count = common_ring_get_count,
+};
+
+static const struct rte_mempool_ops ops_sp_mc = {
+	.name = "ring_sp_mc",
+	.alloc = common_ring_alloc,
+	.free = common_ring_free,
+	.enqueue = common_ring_sp_enqueue,
+	.dequeue = common_ring_mc_dequeue,
+	.get_count = common_ring_get_count,
+};
+
+MEMPOOL_REGISTER_OPS(ops_mp_mc);
+MEMPOOL_REGISTER_OPS(ops_sp_sc);
+MEMPOOL_REGISTER_OPS(ops_mp_sc);
+MEMPOOL_REGISTER_OPS(ops_sp_mc);
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index f63461b..6209ec2 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -20,15 +20,18 @@ DPDK_16.7 {
 	global:
 
 	rte_mempool_check_cookies;
-	rte_mempool_obj_iter;
-	rte_mempool_mem_iter;
 	rte_mempool_create_empty;
+	rte_mempool_free;
+	rte_mempool_mem_iter;
+	rte_mempool_obj_iter;
+	rte_mempool_ops_register;
+	rte_mempool_ops_table;
+	rte_mempool_populate_anon;
+	rte_mempool_populate_default;
 	rte_mempool_populate_phys;
 	rte_mempool_populate_phys_tab;
 	rte_mempool_populate_virt;
-	rte_mempool_populate_default;
-	rte_mempool_populate_anon;
-	rte_mempool_free;
+	rte_mempool_set_ops_byname;
 
 	local: *;
 } DPDK_2.0;
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 238+ messages in thread

* [PATCH v13 2/3] app/test: test external mempool manager
  2016-06-16 12:30                       ` [PATCH v13 " David Hunt
  2016-06-16 12:30                         ` [PATCH v13 1/3] mempool: support external mempool operations David Hunt
@ 2016-06-16 12:30                         ` David Hunt
  2016-06-16 12:30                         ` [PATCH v13 3/3] mbuf: make default mempool ops configurable at build David Hunt
  2016-06-17 13:53                         ` [PATCH v14 0/3] mempool: add mempool handler feature David Hunt
  3 siblings, 0 replies; 238+ messages in thread
From: David Hunt @ 2016-06-16 12:30 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, viktorin, jerin.jacob, shreyansh.jain, David Hunt

Use a minimal custom mempool external ops and check that it also
passes basic mempool autotests.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
 app/test/test_mempool.c | 122 +++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 120 insertions(+), 2 deletions(-)

diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c
index b586249..31582d8 100644
--- a/app/test/test_mempool.c
+++ b/app/test/test_mempool.c
@@ -83,6 +83,99 @@
 static rte_atomic32_t synchro;
 
 /*
+ * Simple example of custom mempool structure. Holds pointers to all the
+ * elements which are simply malloc'd in this example.
+ */
+struct custom_mempool {
+	rte_spinlock_t lock;
+	unsigned count;
+	unsigned size;
+	void *elts[];
+};
+
+/*
+ * Allocate the custom mempool structure, with room for an array of
+ * pointers to all the elements that will later be handed to the pool.
+ */
+static int
+custom_mempool_alloc(struct rte_mempool *mp)
+{
+	struct custom_mempool *cm;
+
+	cm = rte_zmalloc("custom_mempool",
+		sizeof(struct custom_mempool) + mp->size * sizeof(void *), 0);
+	if (cm == NULL)
+		return -ENOMEM;
+
+	rte_spinlock_init(&cm->lock);
+	cm->count = 0;
+	cm->size = mp->size;
+	mp->pool_data = cm;
+	return 0;
+}
+
+static void
+custom_mempool_free(struct rte_mempool *mp)
+{
+	rte_free((void *)(mp->pool_data));
+}
+
+static int
+custom_mempool_enqueue(struct rte_mempool *mp, void * const *obj_table,
+		unsigned n)
+{
+	struct custom_mempool *cm = (struct custom_mempool *)(mp->pool_data);
+	int ret = 0;
+
+	rte_spinlock_lock(&cm->lock);
+	if (cm->count + n > cm->size) {
+		ret = -ENOBUFS;
+	} else {
+		memcpy(&cm->elts[cm->count], obj_table, sizeof(void *) * n);
+		cm->count += n;
+	}
+	rte_spinlock_unlock(&cm->lock);
+	return ret;
+}
+
+
+static int
+custom_mempool_dequeue(struct rte_mempool *mp, void **obj_table, unsigned n)
+{
+	struct custom_mempool *cm = (struct custom_mempool *)(mp->pool_data);
+	int ret = 0;
+
+	rte_spinlock_lock(&cm->lock);
+	if (n > cm->count) {
+		ret = -ENOENT;
+	} else {
+		cm->count -= n;
+		memcpy(obj_table, &cm->elts[cm->count], sizeof(void *) * n);
+	}
+	rte_spinlock_unlock(&cm->lock);
+	return ret;
+}
+
+static unsigned
+custom_mempool_get_count(const struct rte_mempool *mp)
+{
+	struct custom_mempool *cm = (struct custom_mempool *)(mp->pool_data);
+
+	return cm->count;
+}
+
+static struct rte_mempool_ops mempool_ops_custom = {
+	.name = "custom_handler",
+	.alloc = custom_mempool_alloc,
+	.free = custom_mempool_free,
+	.enqueue = custom_mempool_enqueue,
+	.dequeue = custom_mempool_dequeue,
+	.get_count = custom_mempool_get_count,
+};
+
+MEMPOOL_REGISTER_OPS(mempool_ops_custom);
+
+/*
  * save the object number in the first 4 bytes of object data. All
  * other bytes are set to 0.
  */
@@ -292,12 +385,14 @@ static int test_mempool_single_consumer(void)
  * test function for mempool test based on singple consumer and single producer,
  * can run on one lcore only
  */
-static int test_mempool_launch_single_consumer(__attribute__((unused)) void *arg)
+static int
+test_mempool_launch_single_consumer(__attribute__((unused)) void *arg)
 {
 	return test_mempool_single_consumer();
 }
 
-static void my_mp_init(struct rte_mempool * mp, __attribute__((unused)) void * arg)
+static void
+my_mp_init(struct rte_mempool *mp, __attribute__((unused)) void *arg)
 {
 	printf("mempool name is %s\n", mp->name);
 	/* nothing to be implemented here*/
@@ -477,6 +572,7 @@ test_mempool(void)
 {
 	struct rte_mempool *mp_cache = NULL;
 	struct rte_mempool *mp_nocache = NULL;
+	struct rte_mempool *mp_ext = NULL;
 
 	rte_atomic32_init(&synchro);
 
@@ -505,6 +601,27 @@ test_mempool(void)
 		goto err;
 	}
 
+	/* create a mempool with an external handler */
+	mp_ext = rte_mempool_create_empty("test_ext",
+		MEMPOOL_SIZE,
+		MEMPOOL_ELT_SIZE,
+		RTE_MEMPOOL_CACHE_MAX_SIZE, 0,
+		SOCKET_ID_ANY, 0);
+
+	if (mp_ext == NULL) {
+		printf("cannot allocate mp_ext mempool\n");
+		goto err;
+	}
+	if (rte_mempool_set_ops_byname(mp_ext, "custom_handler", NULL) < 0) {
+		printf("cannot set custom handler\n");
+		goto err;
+	}
+	if (rte_mempool_populate_default(mp_ext) < 0) {
+		printf("cannot populate mp_ext mempool\n");
+		goto err;
+	}
+	rte_mempool_obj_iter(mp_ext, my_obj_init, NULL);
+
 	/* retrieve the mempool from its name */
 	if (rte_mempool_lookup("test_nocache") != mp_nocache) {
 		printf("Cannot lookup mempool from its name\n");
@@ -545,6 +662,7 @@ test_mempool(void)
 err:
 	rte_mempool_free(mp_nocache);
 	rte_mempool_free(mp_cache);
+	rte_mempool_free(mp_ext);
 	return -1;
 }
 
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 238+ messages in thread

* [PATCH v13 3/3] mbuf: make default mempool ops configurable at build
  2016-06-16 12:30                       ` [PATCH v13 " David Hunt
  2016-06-16 12:30                         ` [PATCH v13 1/3] mempool: support external mempool operations David Hunt
  2016-06-16 12:30                         ` [PATCH v13 2/3] app/test: test external mempool manager David Hunt
@ 2016-06-16 12:30                         ` David Hunt
  2016-06-17 13:53                         ` [PATCH v14 0/3] mempool: add mempool handler feature David Hunt
  3 siblings, 0 replies; 238+ messages in thread
From: David Hunt @ 2016-06-16 12:30 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, viktorin, jerin.jacob, shreyansh.jain, David Hunt

By default, the mempool ops used for mbuf allocations is a multi
producer and multi consumer ring. We could imagine a target (maybe some
network processors?) that provides an hardware-assisted pool
mechanism. In this case, the default configuration for this architecture
would contain a different value for RTE_MBUF_DEFAULT_MEMPOOL_OPS.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
 config/common_base         |  1 +
 lib/librte_mbuf/rte_mbuf.c | 26 ++++++++++++++++++++++----
 2 files changed, 23 insertions(+), 4 deletions(-)

diff --git a/config/common_base b/config/common_base
index b9ba405..13ad4dd 100644
--- a/config/common_base
+++ b/config/common_base
@@ -394,6 +394,7 @@ CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG=n
 #
 CONFIG_RTE_LIBRTE_MBUF=y
 CONFIG_RTE_LIBRTE_MBUF_DEBUG=n
+CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS="ring_mp_mc"
 CONFIG_RTE_MBUF_REFCNT_ATOMIC=y
 CONFIG_RTE_PKTMBUF_HEADROOM=128
 
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index eec1456..e72eb6b 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -153,6 +153,7 @@ rte_pktmbuf_pool_create(const char *name, unsigned n,
 	unsigned cache_size, uint16_t priv_size, uint16_t data_room_size,
 	int socket_id)
 {
+	struct rte_mempool *mp;
 	struct rte_pktmbuf_pool_private mbp_priv;
 	unsigned elt_size;
 
@@ -167,10 +168,27 @@ rte_pktmbuf_pool_create(const char *name, unsigned n,
 	mbp_priv.mbuf_data_room_size = data_room_size;
 	mbp_priv.mbuf_priv_size = priv_size;
 
-	return rte_mempool_create(name, n, elt_size,
-		cache_size, sizeof(struct rte_pktmbuf_pool_private),
-		rte_pktmbuf_pool_init, &mbp_priv, rte_pktmbuf_init, NULL,
-		socket_id, 0);
+	mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
+		 sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
+	if (mp == NULL)
+		return NULL;
+
+	rte_errno = rte_mempool_set_ops_byname(mp,
+			RTE_MBUF_DEFAULT_MEMPOOL_OPS, NULL);
+	if (rte_errno != 0) {
+		RTE_LOG(ERR, MBUF, "error setting mempool handler\n");
+		return NULL;
+	}
+	rte_pktmbuf_pool_init(mp, &mbp_priv);
+
+	if (rte_mempool_populate_default(mp) < 0) {
+		rte_mempool_free(mp);
+		return NULL;
+	}
+
+	rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);
+
+	return mp;
 }
 
 /* do some sanity checks on a mbuf: panic if it fails */
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 238+ messages in thread

* Re: [PATCH v13 1/3] mempool: support external mempool operations
  2016-06-16 12:30                         ` [PATCH v13 1/3] mempool: support external mempool operations David Hunt
@ 2016-06-17  6:58                           ` Hunt, David
  2016-06-17  8:08                             ` Olivier Matz
  2016-06-17 10:18                           ` Olivier Matz
  1 sibling, 1 reply; 238+ messages in thread
From: Hunt, David @ 2016-06-17  6:58 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, viktorin, jerin.jacob, shreyansh.jain

A comment below:

On 16/6/2016 1:30 PM, David Hunt wrote:
> +/**
> + * Set the ops of a mempool.
> + *
> + * This can only be done on a mempool that is not populated, i.e. just after
> + * a call to rte_mempool_create_empty().
> + *
> + * @param mp
> + *   Pointer to the memory pool.
> + * @param name
> + *   Name of the ops structure to use for this mempool.
+ * @param pool_config
+ *   Opaque data that can be used by the ops functions.
> + * @return
> + *   - 0: Success; the mempool is now using the requested ops functions.
> + *   - -EINVAL - Invalid ops struct name provided.
> + *   - -EEXIST - mempool already has an ops struct assigned.
> + */
> +int
> +rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
> +		void *pool_config);
> +

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v13 1/3] mempool: support external mempool operations
  2016-06-17  6:58                           ` Hunt, David
@ 2016-06-17  8:08                             ` Olivier Matz
  2016-06-17  8:42                               ` Hunt, David
  0 siblings, 1 reply; 238+ messages in thread
From: Olivier Matz @ 2016-06-17  8:08 UTC (permalink / raw)
  To: Hunt, David, dev; +Cc: viktorin, jerin.jacob, shreyansh.jain

Hi David,

On 06/17/2016 08:58 AM, Hunt, David wrote:
> A comment below:
> 
> On 16/6/2016 1:30 PM, David Hunt wrote:
>> +/**
>> + * Set the ops of a mempool.
>> + *
>> + * This can only be done on a mempool that is not populated, i.e.
>> just after
>> + * a call to rte_mempool_create_empty().
>> + *
>> + * @param mp
>> + *   Pointer to the memory pool.
>> + * @param name
>> + *   Name of the ops structure to use for this mempool.
> + * @param pool_config
> + *   Opaque data that can be used by the ops functions.
>> + * @return
>> + *   - 0: Success; the mempool is now using the requested ops functions.
>> + *   - -EINVAL - Invalid ops struct name provided.
>> + *   - -EEXIST - mempool already has an ops struct assigned.
>> + */
>> +int
>> +rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
>> +        void *pool_config);
>> +
> 
> 

The changes related to the pool_config look good to me.

If you plan to do a v14 for this API comment, I'm wondering if the
documentation could be slightly modified too. I think "external mempool
manager" was the legacy name for the feature, but maybe it could be
changed in "alternative mempool handlers" or "changing the mempool
handler". I mean the word "external" is probably not appropriate now,
especially if we add other handlers in the mempool lib.

My 2 cents,
Olivier

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v13 1/3] mempool: support external mempool operations
  2016-06-17  8:08                             ` Olivier Matz
@ 2016-06-17  8:42                               ` Hunt, David
  2016-06-17  9:09                                 ` Thomas Monjalon
  0 siblings, 1 reply; 238+ messages in thread
From: Hunt, David @ 2016-06-17  8:42 UTC (permalink / raw)
  To: Olivier Matz, dev; +Cc: viktorin, jerin.jacob, shreyansh.jain



On 17/6/2016 9:08 AM, Olivier Matz wrote:
> Hi David,
>
> On 06/17/2016 08:58 AM, Hunt, David wrote:
>> A comment below:
>>
>> On 16/6/2016 1:30 PM, David Hunt wrote:
>>> +/**
>>> + * Set the ops of a mempool.
>>> + *
>>> + * This can only be done on a mempool that is not populated, i.e.
>>> just after
>>> + * a call to rte_mempool_create_empty().
>>> + *
>>> + * @param mp
>>> + *   Pointer to the memory pool.
>>> + * @param name
>>> + *   Name of the ops structure to use for this mempool.
>> + * @param pool_config
>> + *   Opaque data that can be used by the ops functions.
>>> + * @return
>>> + *   - 0: Success; the mempool is now using the requested ops functions.
>>> + *   - -EINVAL - Invalid ops struct name provided.
>>> + *   - -EEXIST - mempool already has an ops struct assigned.
>>> + */
>>> +int
>>> +rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
>>> +        void *pool_config);
>>> +
>>
> The changes related to the pool_config look good to me.
>
> If you plan to do a v14 for this API comment, I'm wondering if the
> documentation could be slightly modified too. I think "external mempool
> manager" was the legacy name for the feature, but maybe it could be
> changed in "alternative mempool handlers" or "changing the mempool
> handler". I mean the word "external" is probably not appropriate now,
> especially if we add other handlers in the mempool lib.
>
> My 2 cents,
> Olivier

I had not planned on doing another revision. And I think the term "External
Mempool Manager" accurately describes the functionality, so I'd really
prefer to leave it as it is.

Regards,
David.

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v13 1/3] mempool: support external mempool operations
  2016-06-17  8:42                               ` Hunt, David
@ 2016-06-17  9:09                                 ` Thomas Monjalon
  2016-06-17  9:24                                   ` Hunt, David
  0 siblings, 1 reply; 238+ messages in thread
From: Thomas Monjalon @ 2016-06-17  9:09 UTC (permalink / raw)
  To: Hunt, David; +Cc: dev, Olivier Matz, viktorin, jerin.jacob, shreyansh.jain

2016-06-17 09:42, Hunt, David:
> 
> On 17/6/2016 9:08 AM, Olivier Matz wrote:
> > Hi David,
> >
> > On 06/17/2016 08:58 AM, Hunt, David wrote:
> >> A comment below:
> >>
> >> On 16/6/2016 1:30 PM, David Hunt wrote:
> >>> +/**
> >>> + * Set the ops of a mempool.
> >>> + *
> >>> + * This can only be done on a mempool that is not populated, i.e.
> >>> just after
> >>> + * a call to rte_mempool_create_empty().
> >>> + *
> >>> + * @param mp
> >>> + *   Pointer to the memory pool.
> >>> + * @param name
> >>> + *   Name of the ops structure to use for this mempool.
> >> + * @param pool_config
> >> + *   Opaque data that can be used by the ops functions.
> >>> + * @return
> >>> + *   - 0: Success; the mempool is now using the requested ops functions.
> >>> + *   - -EINVAL - Invalid ops struct name provided.
> >>> + *   - -EEXIST - mempool already has an ops struct assigned.
> >>> + */
> >>> +int
> >>> +rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
> >>> +        void *pool_config);
> >>> +
> >>
> > The changes related to the pool_config look good to me.
> >
> > If you plan to do a v14 for this API comment, I'm wondering if the
> > documentation could be slightly modified too. I think "external mempool
> > manager" was the legacy name for the feature, but maybe it could be
> > changed in "alternative mempool handlers" or "changing the mempool
> > handler". I mean the word "external" is probably not appropriate now,
> > especially if we add other handlers in the mempool lib.
> >
> > My 2 cents,
> > Olivier
> 
> I had not planned on doing another revision. And I think the term "External
> Mempool Manager" accurately describes the functionality, so I'd really
> prefer to leave it as it is.

I think there is no manager, just a default handler which can be changed.
I agree the documentation must be fixed.

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v13 1/3] mempool: support external mempool operations
  2016-06-17  9:09                                 ` Thomas Monjalon
@ 2016-06-17  9:24                                   ` Hunt, David
  2016-06-17 10:19                                     ` Olivier Matz
  0 siblings, 1 reply; 238+ messages in thread
From: Hunt, David @ 2016-06-17  9:24 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev, Olivier Matz, viktorin, jerin.jacob, shreyansh.jain



On 17/6/2016 10:09 AM, Thomas Monjalon wrote:
> 2016-06-17 09:42, Hunt, David:
>> On 17/6/2016 9:08 AM, Olivier Matz wrote:
>>> Hi David,
>>>
>>> On 06/17/2016 08:58 AM, Hunt, David wrote:
>>>> A comment below:
>>>>
>>>> On 16/6/2016 1:30 PM, David Hunt wrote:
>>>>> +/**
>>>>> + * Set the ops of a mempool.
>>>>> + *
>>>>> + * This can only be done on a mempool that is not populated, i.e.
>>>>> just after
>>>>> + * a call to rte_mempool_create_empty().
>>>>> + *
>>>>> + * @param mp
>>>>> + *   Pointer to the memory pool.
>>>>> + * @param name
>>>>> + *   Name of the ops structure to use for this mempool.
>>>> + * @param pool_config
>>>> + *   Opaque data that can be used by the ops functions.
>>>>> + * @return
>>>>> + *   - 0: Success; the mempool is now using the requested ops functions.
>>>>> + *   - -EINVAL - Invalid ops struct name provided.
>>>>> + *   - -EEXIST - mempool already has an ops struct assigned.
>>>>> + */
>>>>> +int
>>>>> +rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
>>>>> +        void *pool_config);
>>>>> +
>>> The changes related to the pool_config look good to me.
>>>
>>> If you plan to do a v14 for this API comment, I'm wondering if the
>>> documentation could be slightly modified too. I think "external mempool
>>> manager" was the legacy name for the feature, but maybe it could be
>>> changed in "alternative mempool handlers" or "changing the mempool
>>> handler". I mean the word "external" is probably not appropriate now,
>>> especially if we add other handlers in the mempool lib.
>>>
>>> My 2 cents,
>>> Olivier
>> I had not planned on doing another revision. And I think the term "External
>> Mempool Manager" accurately describes the functionality, so I'd really
>> prefer to leave it as it is.
> I think there is no manager, just a default handler which can be changed.
> I agree the documentation must be fixed.

OK, I have two suggestions to add into the mix.
1. mempool handler framework
or simply
2. mempool handlers. (the alternative is implied). "The mempool handler 
feature", etc.

Thoughts?
David.

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v13 1/3] mempool: support external mempool operations
  2016-06-16 12:30                         ` [PATCH v13 1/3] mempool: support external mempool operations David Hunt
  2016-06-17  6:58                           ` Hunt, David
@ 2016-06-17 10:18                           ` Olivier Matz
  2016-06-17 10:47                             ` Hunt, David
  1 sibling, 1 reply; 238+ messages in thread
From: Olivier Matz @ 2016-06-17 10:18 UTC (permalink / raw)
  To: David Hunt, dev; +Cc: viktorin, jerin.jacob, shreyansh.jain

Hi David,

While testing Lazaros' patch, I found an issue in this series.
If the test application is started with --no-huge, the
mempool_autotest does not work. Please find the explanation
below:

On 06/16/2016 02:30 PM, David Hunt wrote:
> @@ -386,9 +352,9 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
>  	int ret;
>  
>  	/* create the internal ring if not already done */
> -	if ((mp->flags & MEMPOOL_F_RING_CREATED) == 0) {
> -		ret = rte_mempool_ring_create(mp);
> -		if (ret < 0)
> +	if ((mp->flags & MEMPOOL_F_POOL_CREATED) == 0) {
> +		ret = rte_mempool_ops_alloc(mp);
> +		if (ret != 0)
>  			return ret;
>  	}
>  

Previously, the function rte_mempool_ring_create(mp) was setting the
MEMPOOL_F_RING_CREATED flag. Now it is not set. I think we could
set it just after the "return ret".

When started with --no-huge, the mempool memory is allocated in
several chunks (one per page), so it tries to allocate the ring for
each chunk.

Regards,
Olivier

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v13 1/3] mempool: support external mempool operations
  2016-06-17  9:24                                   ` Hunt, David
@ 2016-06-17 10:19                                     ` Olivier Matz
  0 siblings, 0 replies; 238+ messages in thread
From: Olivier Matz @ 2016-06-17 10:19 UTC (permalink / raw)
  To: Hunt, David, Thomas Monjalon; +Cc: dev, viktorin, jerin.jacob, shreyansh.jain

Hi David,

>>>> If you plan to do a v14 for this API comment, I'm wondering if the
>>>> documentation could be slightly modified too. I think "external mempool
>>>> manager" was the legacy name for the feature, but maybe it could be
>>>> changed in "alternative mempool handlers" or "changing the mempool
>>>> handler". I mean the word "external" is probably not appropriate now,
>>>> especially if we add other handlers in the mempool lib.
>>>>
>>>> My 2 cents,
>>>> Olivier
>>> I had not planned on doing another revision. And I think the term
>>> "External
>>> Mempool Manager" accurately describes the functionality, so I'd really
>>> prefer to leave it as it is.
>> I think there is no manager, just a default handler which can be changed.
>> I agree the documentation must be fixed.
> 
> OK, I have two suggestions to add into the mix.
> 1. mempool handler framework
> or simply
> 2. mempool handlers. (the alternative is implied). "The mempool handler
> feature", etc.

Option 2 is fine for me.

Thanks!

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v13 1/3] mempool: support external mempool operations
  2016-06-17 10:18                           ` Olivier Matz
@ 2016-06-17 10:47                             ` Hunt, David
  0 siblings, 0 replies; 238+ messages in thread
From: Hunt, David @ 2016-06-17 10:47 UTC (permalink / raw)
  To: Olivier Matz, dev; +Cc: viktorin, jerin.jacob, shreyansh.jain


On 17/6/2016 11:18 AM, Olivier Matz wrote:
> Hi David,
>
> While testing Lazaros' patch, I found an issue in this series.
> If the test application is started with --no-huge, the
> mempool_autotest does not work. Please find the explanation
> below:
>
> On 06/16/2016 02:30 PM, David Hunt wrote:
>> @@ -386,9 +352,9 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
>>   	int ret;
>>   
>>   	/* create the internal ring if not already done */
>> -	if ((mp->flags & MEMPOOL_F_RING_CREATED) == 0) {
>> -		ret = rte_mempool_ring_create(mp);
>> -		if (ret < 0)
>> +	if ((mp->flags & MEMPOOL_F_POOL_CREATED) == 0) {
>> +		ret = rte_mempool_ops_alloc(mp);
>> +		if (ret != 0)
>>   			return ret;
>>   	}
>>   
> Previously, the function rte_mempool_ring_create(mp) was setting the
> MEMPOOL_F_RING_CREATED flag. Now it is not set. I think we could
> set it just after the "return ret".
>
> When started with --no-huge, the mempool memory is allocated in
> several chunks (one per page), so it tries to allocate the ring for
> each chunk.
>
> Regards,
> Olivier

OK, Will do.

                 ret = rte_mempool_ops_alloc(mp);
                 if (ret != 0)
                         return ret;
+               mp->flags |= MEMPOOL_F_POOL_CREATED;

Rgds,
Dave.

^ permalink raw reply	[flat|nested] 238+ messages in thread

* [PATCH v14 0/3] mempool: add mempool handler feature
  2016-06-16 12:30                       ` [PATCH v13 " David Hunt
                                           ` (2 preceding siblings ...)
  2016-06-16 12:30                         ` [PATCH v13 3/3] mbuf: make default mempool ops configurable at build David Hunt
@ 2016-06-17 13:53                         ` David Hunt
  2016-06-17 13:53                           ` [PATCH v14 1/3] mempool: support mempool handler operations David Hunt
                                             ` (3 more replies)
  3 siblings, 4 replies; 238+ messages in thread
From: David Hunt @ 2016-06-17 13:53 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, viktorin, jerin.jacob, shreyansh.jain

Here's the latest version of the Mempool Handler feature (previously
known as the External Mempool Manager).

It's re-based on top of the latest head as of 17/6/2016, including
Olivier's 35-part patch series on mempool re-org [1].

[1] http://dpdk.org/ml/archives/dev/2016-May/039229.html

v14 changes:

 * set MEMPOOL_F_RING_CREATED flag after rte_mempool_ring_create() is called.
 * Changed name of feature from "external mempool manager" to "mempool handler"
   and updated comments and release notes accordingly.
 * Added a comment for newly added pool_config param in
   rte_mempool_set_ops_byname.

v13 changes:

 * Added in extra opaque data (pool_config) to mempool struct for mempool
   configuration by the ops functions. For example, this can be used to pass
   device names or device flags to the underlying alloc function.
 * Added mempool_config param to rte_mempool_set_ops_byname()

v12 changes:

 * Fixed a comment (function param h -> ops)
 * fixed a typo (callbacki)

v11 changes:

 * Fixed comments (added '.' where needed for consistency)
 * removed ABI breakage notice for mempool manager in deprecation.rst
 * Added description of the external mempool manager functionality to
   doc/guides/prog_guide/mempool_lib.rst (John Mc reviewed)
 * renamed rte_mempool_default.c to rte_mempool_ring.c

v10 changes:

 * changed the _put/_get op names to _enqueue/_dequeue to be consistent
   with the function names
 * some rte_errno cleanup
 * comment tweaks about when to set pool_data
 * removed an un-needed check for ops->alloc == NULL

v9 changes:

 * added a check for NULL alloc in rte_mempool_ops_register
 * rte_mempool_alloc_t now returns int instead of void*
 * fixed some comment typo's
 * removed some unneeded typecasts
 * changed a return NULL to return -EEXIST in rte_mempool_ops_register
 * fixed rte_mempool_version.map file so builds ok as shared libs
 * moved flags check from rte_mempool_create_empty to rte_mempool_create

v8 changes:

 * merged first three patches in the series into one.
 * changed parameters to ops callback to all be rte_mempool pointer
   rather than pointer to opaque data or uint64.
 * comment fixes.
 * fixed parameter to _free function (was inconsistent).
 * changed MEMPOOL_F_RING_CREATED to MEMPOOL_F_POOL_CREATED

v7 changes:

 * Changed rte_mempool_handler_table to rte_mempool_ops_table
 * Changed hander_idx to ops_index in rte_mempool struct
 * Reworked comments in rte_mempool.h around ops functions
 * Changed rte_mempool_hander.c to rte_mempool_ops.c
 * Changed all functions containing _handler_ to _ops_
 * Now there is no mention of 'handler' left
 * Other small changes out of review of mailing list

v6 changes:

 * Moved the flags handling from rte_mempool_create_empty to
   rte_mempool_create, as it's only there for backward compatibility
 * Various comment additions and cleanup
 * Renamed rte_mempool_handler to rte_mempool_ops
 * Added a union for *pool and u64 pool_id in struct rte_mempool
 * split the original patch into a few parts for easier review.
 * rename functions with _ext_ to _ops_.
 * addressed review comments
 * renamed put and get functions to enqueue and dequeue
 * changed occurrences of rte_mempool_ops to const, as they
   contain function pointers (security)
 * split out the default external mempool handler into a separate
   patch for easier review

v5 changes:
 * rebasing, as it is dependent on another patch series [1]

v4 changes (Olivier Matz):
 * remove the rte_mempool_create_ext() function. To change the handler, the
   user has to do the following:
   - mp = rte_mempool_create_empty()
   - rte_mempool_set_handler(mp, "my_handler")
   - rte_mempool_populate_default(mp)
   This avoids adding another function with more than 10 arguments, duplicating
   the doxygen comments
 * change the api of rte_mempool_alloc_t: only the mempool pointer is required
   as all information is available in it
 * change the api of rte_mempool_free_t: remove return value
 * move inline wrapper functions from the .c to the .h (else they won't be
   inlined). This implies having a single header file (rte_mempool.h);
   otherwise it would have generated cross-dependency issues.
 * remove now unused MEMPOOL_F_INT_HANDLER (note: it was misused anyway due
   to the use of && instead of &)
 * fix build in debug mode (__MEMPOOL_STAT_ADD(mp, put_pool, n) remaining)
 * fix build with shared libraries (global handler has to be declared in
   the .map file)
 * rationalize #include order
 * remove unused function rte_mempool_get_handler_name()
 * rename some structures, fields, functions
 * remove the static in front of rte_tailq_elem rte_mempool_tailq (comment
   from Yuanhan)
 * test the ext mempool handler in the same file than standard mempool tests,
   avoiding to duplicate the code
 * rework the custom handler in mempool_test
 * rework a bit the patch selecting default mbuf pool handler
 * fix some doxygen comments

v3 changes:
 * simplified the file layout, renamed to rte_mempool_handler.[hc]
 * moved the default handlers into rte_mempool_default.c
 * moved the example handler out into app/test/test_ext_mempool.c
 * removed is_mc/is_mp change, slight perf degradation on sp cached operation
 * removed stack handler, may re-introduce at a later date
 * Changes out of code reviews

v2 changes:
 * There was a lot of duplicate code between rte_mempool_xmem_create and
   rte_mempool_create_ext. This has now been refactored and is now
   hopefully cleaner.
 * The RTE_NEXT_ABI define is now used to allow building of the library
   in a format that is compatible with binaries built against previous
   versions of DPDK.
 * Changes out of code reviews. Hopefully I've got most of them included.

The Mempool Handler feature is an extension to the mempool API that allows
users to add and use an alternative mempool handler, which allows
external memory subsystems such as external hardware memory management
systems and software based memory allocators to be used with DPDK.

The existing API to the internal DPDK mempool handler will remain unchanged
and will be backward compatible. However, there will be an ABI breakage, as
the mempool struct is changing.

There are two aspects to mempool handlers.
  1. Adding the code for your new mempool operations (ops). This is
     achieved by adding a new mempool ops source file into the
     librte_mempool library, and using the REGISTER_MEMPOOL_OPS macro.
  2. Using the new API to call rte_mempool_create_empty and
     rte_mempool_set_ops_byname to create a new mempool
     using the name parameter to identify which ops to use.

New API calls added
 1. A new rte_mempool_create_empty() function
 2. rte_mempool_set_ops_byname() which sets the mempool's ops (functions)
 3. New rte_mempool_populate_default() and rte_mempool_populate_anon()
    functions, which populate the mempool using the relevant ops

Several mempool handlers may be used in the same application. A new
mempool can then be created by using the new rte_mempool_create_empty function,
then calling rte_mempool_set_ops_byname to point the mempool to the relevant
mempool handler callback (ops) structure.

Legacy applications will continue to use the old rte_mempool_create API call,
which uses a ring based mempool handler by default. These applications
will need to be modified to use a new mempool handler.

A mempool handler needs to provide the following functions.
 1. alloc     - allocates the mempool memory, and adds each object onto a ring
 2. enqueue   - puts an object back into the mempool once an application has
                finished with it
 3. dequeue   - gets an object from the mempool for use by the application
 4. get_count - gets the number of available objects in the mempool
 5. free      - frees the mempool memory

Every time an enqueue/dequeue/get_count is called from the application/PMD,
the callback for that mempool is called. These functions are in the fastpath,
and any unoptimised ops may limit performance.

The new APIs are as follows:

1. rte_mempool_create_empty

struct rte_mempool *
rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
    unsigned cache_size, unsigned private_data_size,
    int socket_id, unsigned flags);

2. rte_mempool_set_ops_byname()

int
rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
    void *pool_config);

3. rte_mempool_populate_default()

int rte_mempool_populate_default(struct rte_mempool *mp);

4. rte_mempool_populate_anon()

int rte_mempool_populate_anon(struct rte_mempool *mp);

Please see rte_mempool.h for further information on the parameters.


The important thing to note is that the mempool ops struct is passed by name
to rte_mempool_set_ops_byname, which looks through the ops struct array to
find the ops_index, which is then stored in the rte_mempool structure. This
allows multiple processes to use the same mempool, as the function pointers
are accessed via the ops index.

The mempool ops structure contains callbacks to the implementation of
the ops function, and is set up for registration as follows:

static const struct rte_mempool_ops ops_sp_mc = {
    .name = "ring_sp_mc",
    .alloc = rte_mempool_common_ring_alloc,
    .enqueue = common_ring_sp_enqueue,
    .dequeue = common_ring_mc_dequeue,
    .get_count = common_ring_get_count,
    .free = common_ring_free,
};

The following macro then registers the ops struct in the array of ops
structures:

REGISTER_MEMPOOL_OPS(ops_sp_mc);

For an example of API usage, please see app/test/test_mempool.c, which
implements a rudimentary "custom_handler" mempool handler using simple mallocs
for each mempool object. This file also contains the callbacks and self
registration for the new handler.

David Hunt (2):
  mempool: support mempool handler operations
  mbuf: make default mempool ops configurable at build

Olivier Matz (1):
  app/test: test mempool handler

^ permalink raw reply	[flat|nested] 238+ messages in thread

* [PATCH v14 1/3] mempool: support mempool handler operations
  2016-06-17 13:53                         ` [PATCH v14 0/3] mempool: add mempool handler feature David Hunt
@ 2016-06-17 13:53                           ` David Hunt
  2016-06-17 14:35                             ` Jan Viktorin
  2016-06-17 13:53                           ` [PATCH v14 2/3] app/test: test mempool handler David Hunt
                                             ` (2 subsequent siblings)
  3 siblings, 1 reply; 238+ messages in thread
From: David Hunt @ 2016-06-17 13:53 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, viktorin, jerin.jacob, shreyansh.jain, David Hunt

Until now, the objects stored in a mempool were internally stored in a
ring. This patch introduces the possibility to register external handlers
replacing the ring.

The default behavior remains unchanged, but calling the new function
rte_mempool_set_ops_byname() right after rte_mempool_create_empty() allows
the user to change the handler that will be used when populating
the mempool.

This patch also adds a set of default ops (function callbacks) based
on rte_ring.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
 app/test/test_mempool_perf.c               |   1 -
 doc/guides/prog_guide/mempool_lib.rst      |  32 +++-
 doc/guides/rel_notes/deprecation.rst       |   9 -
 lib/librte_mempool/Makefile                |   2 +
 lib/librte_mempool/rte_mempool.c           |  67 +++-----
 lib/librte_mempool/rte_mempool.h           | 255 ++++++++++++++++++++++++++---
 lib/librte_mempool/rte_mempool_ops.c       | 150 +++++++++++++++++
 lib/librte_mempool/rte_mempool_ring.c      | 161 ++++++++++++++++++
 lib/librte_mempool/rte_mempool_version.map |  13 +-
 9 files changed, 609 insertions(+), 81 deletions(-)
 create mode 100644 lib/librte_mempool/rte_mempool_ops.c
 create mode 100644 lib/librte_mempool/rte_mempool_ring.c

diff --git a/app/test/test_mempool_perf.c b/app/test/test_mempool_perf.c
index c5e3576..c5f8455 100644
--- a/app/test/test_mempool_perf.c
+++ b/app/test/test_mempool_perf.c
@@ -161,7 +161,6 @@ per_lcore_mempool_test(__attribute__((unused)) void *arg)
 							   n_get_bulk);
 				if (unlikely(ret < 0)) {
 					rte_mempool_dump(stdout, mp);
-					rte_ring_dump(stdout, mp->ring);
 					/* in this case, objects are lost... */
 					return -1;
 				}
diff --git a/doc/guides/prog_guide/mempool_lib.rst b/doc/guides/prog_guide/mempool_lib.rst
index c3afc2e..1943fc4 100644
--- a/doc/guides/prog_guide/mempool_lib.rst
+++ b/doc/guides/prog_guide/mempool_lib.rst
@@ -34,7 +34,8 @@ Mempool Library
 ===============
 
 A memory pool is an allocator of a fixed-sized object.
-In the DPDK, it is identified by name and uses a ring to store free objects.
+In the DPDK, it is identified by name and uses a mempool handler to store free objects.
+The default mempool handler is ring based.
 It provides some other optional services such as a per-core object cache and
 an alignment helper to ensure that objects are padded to spread them equally on all DRAM or DDR3 channels.
 
@@ -127,6 +128,35 @@ The maximum size of the cache is static and is defined at compilation time (CONF
    A mempool in Memory with its Associated Ring
 
 
+Mempool Handlers
+------------------------
+
+This allows external memory subsystems, such as external hardware memory
+management systems and software based memory allocators, to be used with DPDK.
+
+There are two aspects to a mempool handler.
+
+* Adding the code for your new mempool operations (ops). This is achieved by
+  adding a new mempool ops code, and using the ``REGISTER_MEMPOOL_OPS`` macro.
+
+* Using the new API to call ``rte_mempool_create_empty()`` and
+  ``rte_mempool_set_ops_byname()`` to create a new mempool and specifying which
+  ops to use.
+
+Several different mempool handlers may be used in the same application. A new
+mempool can be created by using the ``rte_mempool_create_empty()`` function,
+then using ``rte_mempool_set_ops_byname()`` to point the mempool to the
+relevant mempool handler callback (ops) structure.
+
+Legacy applications may continue to use the old ``rte_mempool_create()`` API
+call, which uses a ring based mempool handler by default. These applications
+will need to be modified to use a new mempool handler.
+
+For applications that use ``rte_pktmbuf_pool_create()``, there is a config setting
+(``RTE_MBUF_DEFAULT_MEMPOOL_OPS``) that allows the application to make use of
+an alternative mempool handler.
+
+
 Use Cases
 ---------
 
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index f75183f..3cbc19e 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -34,15 +34,6 @@ Deprecation Notices
   compact API. The ones that remain are backwards compatible and use the
   per-lcore default cache if available. This change targets release 16.07.
 
-* The rte_mempool struct will be changed in 16.07 to facilitate the new
-  external mempool manager functionality.
-  The ring element will be replaced with a more generic 'pool' opaque pointer
-  to allow new mempool handlers to use their own user-defined mempool
-  layout. Also newly added to rte_mempool is a handler index.
-  The existing API will be backward compatible, but there will be new API
-  functions added to facilitate the creation of mempools using an external
-  handler. The 16.07 release will contain these changes.
-
 * A librte_vhost public structures refactor is planned for DPDK 16.07
   that requires both ABI and API change.
   The proposed refactor would expose DPDK vhost dev to applications as
diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
index 43423e0..a4c089e 100644
--- a/lib/librte_mempool/Makefile
+++ b/lib/librte_mempool/Makefile
@@ -42,6 +42,8 @@ LIBABIVER := 2
 
 # all source are stored in SRCS-y
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool.c
+SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_ops.c
+SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_ring.c
 # install includes
 SYMLINK-$(CONFIG_RTE_LIBRTE_MEMPOOL)-include := rte_mempool.h
 
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index af71edd..e6a83d0 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -148,7 +148,7 @@ mempool_add_elem(struct rte_mempool *mp, void *obj, phys_addr_t physaddr)
 #endif
 
 	/* enqueue in ring */
-	rte_ring_sp_enqueue(mp->ring, obj);
+	rte_mempool_ops_enqueue_bulk(mp, &obj, 1);
 }
 
 /* call obj_cb() for each mempool element */
@@ -303,40 +303,6 @@ rte_mempool_xmem_usage(__rte_unused void *vaddr, uint32_t elt_num,
 	return (size_t)paddr_idx << pg_shift;
 }
 
-/* create the internal ring */
-static int
-rte_mempool_ring_create(struct rte_mempool *mp)
-{
-	int rg_flags = 0, ret;
-	char rg_name[RTE_RING_NAMESIZE];
-	struct rte_ring *r;
-
-	ret = snprintf(rg_name, sizeof(rg_name),
-		RTE_MEMPOOL_MZ_FORMAT, mp->name);
-	if (ret < 0 || ret >= (int)sizeof(rg_name))
-		return -ENAMETOOLONG;
-
-	/* ring flags */
-	if (mp->flags & MEMPOOL_F_SP_PUT)
-		rg_flags |= RING_F_SP_ENQ;
-	if (mp->flags & MEMPOOL_F_SC_GET)
-		rg_flags |= RING_F_SC_DEQ;
-
-	/* Allocate the ring that will be used to store objects.
-	 * Ring functions will return appropriate errors if we are
-	 * running as a secondary process etc., so no checks made
-	 * in this function for that condition.
-	 */
-	r = rte_ring_create(rg_name, rte_align32pow2(mp->size + 1),
-		mp->socket_id, rg_flags);
-	if (r == NULL)
-		return -rte_errno;
-
-	mp->ring = r;
-	mp->flags |= MEMPOOL_F_RING_CREATED;
-	return 0;
-}
-
 /* free a memchunk allocated with rte_memzone_reserve() */
 static void
 rte_mempool_memchunk_mz_free(__rte_unused struct rte_mempool_memhdr *memhdr,
@@ -354,7 +320,7 @@ rte_mempool_free_memchunks(struct rte_mempool *mp)
 	void *elt;
 
 	while (!STAILQ_EMPTY(&mp->elt_list)) {
-		rte_ring_sc_dequeue(mp->ring, &elt);
+		rte_mempool_ops_dequeue_bulk(mp, &elt, 1);
 		(void)elt;
 		STAILQ_REMOVE_HEAD(&mp->elt_list, next);
 		mp->populated_size--;
@@ -386,10 +352,11 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
 	int ret;
 
 	/* create the internal ring if not already done */
-	if ((mp->flags & MEMPOOL_F_RING_CREATED) == 0) {
-		ret = rte_mempool_ring_create(mp);
-		if (ret < 0)
+	if ((mp->flags & MEMPOOL_F_POOL_CREATED) == 0) {
+		ret = rte_mempool_ops_alloc(mp);
+		if (ret != 0)
 			return ret;
+		mp->flags |= MEMPOOL_F_POOL_CREATED;
 	}
 
 	/* mempool is already populated */
@@ -703,7 +670,7 @@ rte_mempool_free(struct rte_mempool *mp)
 	rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
 
 	rte_mempool_free_memchunks(mp);
-	rte_ring_free(mp->ring);
+	rte_mempool_ops_free(mp);
 	rte_memzone_free(mp->mz);
 }
 
@@ -815,6 +782,7 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
 		RTE_PTR_ADD(mp, MEMPOOL_HEADER_SIZE(mp, 0));
 
 	te->data = mp;
+
 	rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
 	TAILQ_INSERT_TAIL(mempool_list, te, next);
 	rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
@@ -844,6 +812,19 @@ rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
 	if (mp == NULL)
 		return NULL;
 
+	/*
+	 * Since we have 4 combinations of SP/SC/MP/MC, examine the flags to
+	 * set the correct index into the table of ops structs.
+	 */
+	if ((flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET)) == (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
+		rte_mempool_set_ops_byname(mp, "ring_sp_sc", NULL);
+	else if (flags & MEMPOOL_F_SP_PUT)
+		rte_mempool_set_ops_byname(mp, "ring_sp_mc", NULL);
+	else if (flags & MEMPOOL_F_SC_GET)
+		rte_mempool_set_ops_byname(mp, "ring_mp_sc", NULL);
+	else
+		rte_mempool_set_ops_byname(mp, "ring_mp_mc", NULL);
+
 	/* call the mempool priv initializer */
 	if (mp_init)
 		mp_init(mp, mp_init_arg);
@@ -930,7 +911,7 @@ rte_mempool_count(const struct rte_mempool *mp)
 	unsigned count;
 	unsigned lcore_id;
 
-	count = rte_ring_count(mp->ring);
+	count = rte_mempool_ops_get_count(mp);
 
 	if (mp->cache_size == 0)
 		return count;
@@ -1119,7 +1100,7 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 
 	fprintf(f, "mempool <%s>@%p\n", mp->name, mp);
 	fprintf(f, "  flags=%x\n", mp->flags);
-	fprintf(f, "  ring=<%s>@%p\n", mp->ring->name, mp->ring);
+	fprintf(f, "  pool=%p\n", mp->pool_data);
 	fprintf(f, "  phys_addr=0x%" PRIx64 "\n", mp->mz->phys_addr);
 	fprintf(f, "  nb_mem_chunks=%u\n", mp->nb_mem_chunks);
 	fprintf(f, "  size=%"PRIu32"\n", mp->size);
@@ -1140,7 +1121,7 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 	}
 
 	cache_count = rte_mempool_dump_cache(f, mp);
-	common_count = rte_ring_count(mp->ring);
+	common_count = rte_mempool_ops_get_count(mp);
 	if ((cache_count + common_count) > mp->size)
 		common_count = mp->size - cache_count;
 	fprintf(f, "  common_pool_count=%u\n", common_count);
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 60339bd..2d7c980 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -67,6 +67,7 @@
 #include <inttypes.h>
 #include <sys/queue.h>
 
+#include <rte_spinlock.h>
 #include <rte_log.h>
 #include <rte_debug.h>
 #include <rte_lcore.h>
@@ -203,10 +204,14 @@ struct rte_mempool_memhdr {
  */
 struct rte_mempool {
 	char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool. */
-	struct rte_ring *ring;           /**< Ring to store objects. */
-	const struct rte_memzone *mz;    /**< Memzone where pool is allocated */
+	union {
+		void *pool_data;         /**< Ring or pool to store objects. */
+		uint64_t pool_id;        /**< External mempool identifier. */
+	};
+	void *pool_config;               /**< optional args for ops alloc. */
+	const struct rte_memzone *mz;    /**< Memzone where pool is alloc'd. */
 	int flags;                       /**< Flags of the mempool. */
-	int socket_id;                   /**< Socket id passed at mempool creation. */
+	int socket_id;                   /**< Socket id passed at create. */
 	uint32_t size;                   /**< Max size of the mempool. */
 	uint32_t cache_size;             /**< Size of per-lcore local cache. */
 	uint32_t cache_flushthresh;
@@ -217,6 +222,14 @@ struct rte_mempool {
 	uint32_t trailer_size;           /**< Size of trailer (after elt). */
 
 	unsigned private_data_size;      /**< Size of private data. */
+	/**
+	 * Index into rte_mempool_ops_table array of mempool ops
+	 * structs, which contain callback function pointers.
+	 * We're using an index here rather than pointers to the callbacks
+	 * to facilitate any secondary processes that may want to use
+	 * this mempool.
+	 */
+	int32_t ops_index;
 
 	struct rte_mempool_cache *local_cache; /**< Per-lcore local cache */
 
@@ -235,7 +248,7 @@ struct rte_mempool {
 #define MEMPOOL_F_NO_CACHE_ALIGN 0x0002 /**< Do not align objs on cache lines.*/
 #define MEMPOOL_F_SP_PUT         0x0004 /**< Default put is "single-producer".*/
 #define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
-#define MEMPOOL_F_RING_CREATED   0x0010 /**< Internal: ring is created */
+#define MEMPOOL_F_POOL_CREATED   0x0010 /**< Internal: pool is created. */
 #define MEMPOOL_F_NO_PHYS_CONTIG 0x0020 /**< Don't need physically contiguous objs. */
 
 /**
@@ -325,6 +338,215 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp,
 #define __mempool_check_cookies(mp, obj_table_const, n, free) do {} while(0)
 #endif /* RTE_LIBRTE_MEMPOOL_DEBUG */
 
+#define RTE_MEMPOOL_OPS_NAMESIZE 32 /**< Max length of ops struct name. */
+
+/**
+ * Prototype for implementation specific data provisioning function.
+ *
+ * The function should provide the implementation-specific memory
+ * for use by the other mempool ops functions in a given mempool ops struct.
+ * E.g. the default ops provides an instance of the rte_ring for this purpose.
+ * Other handlers will most likely point to a different type of data
+ * structure, which will be transparent to the application programmer.
+ * This function should set mp->pool_data.
+ */
+typedef int (*rte_mempool_alloc_t)(struct rte_mempool *mp);
+
+/**
+ * Free the opaque private data pointed to by mp->pool_data pointer.
+ */
+typedef void (*rte_mempool_free_t)(struct rte_mempool *mp);
+
+/**
+ * Enqueue an object into the external pool.
+ */
+typedef int (*rte_mempool_enqueue_t)(struct rte_mempool *mp,
+		void * const *obj_table, unsigned int n);
+
+/**
+ * Dequeue an object from the external pool.
+ */
+typedef int (*rte_mempool_dequeue_t)(struct rte_mempool *mp,
+		void **obj_table, unsigned int n);
+
+/**
+ * Return the number of available objects in the external pool.
+ */
+typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
+
+/** Structure defining a set of mempool operations. */
+struct rte_mempool_ops {
+	char name[RTE_MEMPOOL_OPS_NAMESIZE]; /**< Name of mempool ops struct. */
+	rte_mempool_alloc_t alloc;       /**< Allocate private data. */
+	rte_mempool_free_t free;         /**< Free the external pool. */
+	rte_mempool_enqueue_t enqueue;   /**< Enqueue an object. */
+	rte_mempool_dequeue_t dequeue;   /**< Dequeue an object. */
+	rte_mempool_get_count get_count; /**< Get qty of available objs. */
+} __rte_cache_aligned;
+
+#define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */
+
+/**
+ * Structure storing the table of registered ops structs, each of which contain
+ * the function pointers for the mempool ops functions.
+ * Each process has its own storage for this ops struct array so that
+ * the mempools can be shared across primary and secondary processes.
+ * The indices used to access the array are valid across processes, whereas
+ * any function pointers stored directly in the mempool struct would not be.
+ * This is why the mempool struct stores only an "ops_index" rather than
+ * direct function pointers.
+ */
+struct rte_mempool_ops_table {
+	rte_spinlock_t sl;     /**< Spinlock for add/delete. */
+	uint32_t num_ops;      /**< Number of used ops structs in the table. */
+	/**
+	 * Storage for all possible ops structs.
+	 */
+	struct rte_mempool_ops ops[RTE_MEMPOOL_MAX_OPS_IDX];
+} __rte_cache_aligned;
+
+/** Array of registered ops structs. */
+extern struct rte_mempool_ops_table rte_mempool_ops_table;
+
+/**
+ * @internal Get the mempool ops struct from its index.
+ *
+ * @param ops_index
+ *   The index of the ops struct in the ops struct table. It must be a valid
+ *   index: (0 <= idx < num_ops).
+ * @return
+ *   The pointer to the ops struct in the table.
+ */
+static inline struct rte_mempool_ops *
+rte_mempool_ops_get(int ops_index)
+{
+	RTE_VERIFY(ops_index < RTE_MEMPOOL_MAX_OPS_IDX);
+
+	return &rte_mempool_ops_table.ops[ops_index];
+}
+
+/**
+ * @internal Wrapper for mempool_ops alloc callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @return
+ *   - 0: Success; successfully allocated mempool pool_data.
+ *   - <0: Error; code of alloc function.
+ */
+int
+rte_mempool_ops_alloc(struct rte_mempool *mp);
+
+/**
+ * @internal Wrapper for mempool_ops get callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param obj_table
+ *   Pointer to a table of void * pointers (objects).
+ * @param n
+ *   Number of objects to get.
+ * @return
+ *   - 0: Success; got n objects.
+ *   - <0: Error; code of get function.
+ */
+static inline int
+rte_mempool_ops_dequeue_bulk(struct rte_mempool *mp,
+		void **obj_table, unsigned n)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_ops_get(mp->ops_index);
+	return ops->dequeue(mp, obj_table, n);
+}
+
+/**
+ * @internal wrapper for mempool_ops put callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param obj_table
+ *   Pointer to a table of void * pointers (objects).
+ * @param n
+ *   Number of objects to put.
+ * @return
+ *   - 0: Success; n objects supplied.
+ *   - <0: Error; code of put function.
+ */
+static inline int
+rte_mempool_ops_enqueue_bulk(struct rte_mempool *mp, void * const *obj_table,
+		unsigned n)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_ops_get(mp->ops_index);
+	return ops->enqueue(mp, obj_table, n);
+}
+
+/**
+ * @internal wrapper for mempool_ops get_count callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @return
+ *   The number of available objects in the external pool.
+ */
+unsigned
+rte_mempool_ops_get_count(const struct rte_mempool *mp);
+
+/**
+ * @internal wrapper for mempool_ops free callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ */
+void
+rte_mempool_ops_free(struct rte_mempool *mp);
+
+/**
+ * Set the ops of a mempool.
+ *
+ * This can only be done on a mempool that is not populated, i.e. just after
+ * a call to rte_mempool_create_empty().
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param name
+ *   Name of the ops structure to use for this mempool.
+ * @param pool_config
+ *   Opaque data that can be passed by the application to the ops functions.
+ * @return
+ *   - 0: Success; the mempool is now using the requested ops functions.
+ *   - -EINVAL - Invalid ops struct name provided.
+ *   - -EEXIST - mempool already has an ops struct assigned.
+ */
+int
+rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
+		void *pool_config);
+
+/**
+ * Register mempool operations.
+ *
+ * @param ops
+ *   Pointer to an ops structure to register.
+ * @return
+ *   - >=0: Success; return the index of the ops struct in the table.
+ *   - -EINVAL - some missing callbacks while registering ops struct.
+ *   - -ENOSPC - the maximum number of ops structs has been reached.
+ */
+int rte_mempool_ops_register(const struct rte_mempool_ops *ops);
+
+/**
+ * Macro to statically register the ops of a mempool handler.
+ * Note that rte_mempool_ops_register fails silently here when
+ * more than RTE_MEMPOOL_MAX_OPS_IDX ops structs are registered.
+ */
+#define MEMPOOL_REGISTER_OPS(ops)					\
+	void mp_hdlr_init_##ops(void);					\
+	void __attribute__((constructor, used)) mp_hdlr_init_##ops(void)\
+	{								\
+		rte_mempool_ops_register(&ops);			\
+	}
+
 /**
  * An object callback function for mempool.
  *
@@ -774,7 +996,7 @@ __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
 	cache->len += n;
 
 	if (cache->len >= flushthresh) {
-		rte_ring_mp_enqueue_bulk(mp->ring, &cache->objs[cache_size],
+		rte_mempool_ops_enqueue_bulk(mp, &cache->objs[cache_size],
 				cache->len - cache_size);
 		cache->len = cache_size;
 	}
@@ -785,19 +1007,10 @@ ring_enqueue:
 
 	/* push remaining objects in ring */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-	if (is_mp) {
-		if (rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n) < 0)
-			rte_panic("cannot put objects in mempool\n");
-	}
-	else {
-		if (rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n) < 0)
-			rte_panic("cannot put objects in mempool\n");
-	}
+	if (rte_mempool_ops_enqueue_bulk(mp, obj_table, n) < 0)
+		rte_panic("cannot put objects in mempool\n");
 #else
-	if (is_mp)
-		rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n);
-	else
-		rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n);
+	rte_mempool_ops_enqueue_bulk(mp, obj_table, n);
 #endif
 }
 
@@ -945,7 +1158,8 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
 		uint32_t req = n + (cache_size - cache->len);
 
 		/* How many do we require i.e. number to fill the cache + the request */
-		ret = rte_ring_mc_dequeue_bulk(mp->ring, &cache->objs[cache->len], req);
+		ret = rte_mempool_ops_dequeue_bulk(mp,
+			&cache->objs[cache->len], req);
 		if (unlikely(ret < 0)) {
 			/*
 			 * In the offchance that we are buffer constrained,
@@ -972,10 +1186,7 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
 ring_dequeue:
 
 	/* get remaining objects from ring */
-	if (is_mc)
-		ret = rte_ring_mc_dequeue_bulk(mp->ring, obj_table, n);
-	else
-		ret = rte_ring_sc_dequeue_bulk(mp->ring, obj_table, n);
+	ret = rte_mempool_ops_dequeue_bulk(mp, obj_table, n);
 
 	if (ret < 0)
 		__MEMPOOL_STAT_ADD(mp, get_fail, n);
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
new file mode 100644
index 0000000..7977a14
--- /dev/null
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -0,0 +1,150 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2016 6WIND S.A.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <string.h>
+
+#include <rte_mempool.h>
+#include <rte_errno.h>
+
+/* indirect jump table to support external memory pools. */
+struct rte_mempool_ops_table rte_mempool_ops_table = {
+	.sl =  RTE_SPINLOCK_INITIALIZER,
+	.num_ops = 0
+};
+
+/* add a new ops struct in rte_mempool_ops_table, return its index. */
+int
+rte_mempool_ops_register(const struct rte_mempool_ops *h)
+{
+	struct rte_mempool_ops *ops;
+	int16_t ops_index;
+
+	rte_spinlock_lock(&rte_mempool_ops_table.sl);
+
+	if (rte_mempool_ops_table.num_ops >=
+			RTE_MEMPOOL_MAX_OPS_IDX) {
+		rte_spinlock_unlock(&rte_mempool_ops_table.sl);
+		RTE_LOG(ERR, MEMPOOL,
+			"Maximum number of mempool ops structs exceeded\n");
+		return -ENOSPC;
+	}
+
+	if (h->alloc == NULL || h->enqueue == NULL ||
+			h->dequeue == NULL || h->get_count == NULL) {
+		rte_spinlock_unlock(&rte_mempool_ops_table.sl);
+		RTE_LOG(ERR, MEMPOOL,
+			"Missing callback while registering mempool ops\n");
+		return -EINVAL;
+	}
+
+	if (strlen(h->name) >= sizeof(ops->name) - 1) {
+		RTE_LOG(DEBUG, EAL, "%s(): mempool_ops <%s>: name too long\n",
+				__func__, h->name);
+		rte_errno = EEXIST;
+		return -EEXIST;
+	}
+
+	ops_index = rte_mempool_ops_table.num_ops++;
+	ops = &rte_mempool_ops_table.ops[ops_index];
+	snprintf(ops->name, sizeof(ops->name), "%s", h->name);
+	ops->alloc = h->alloc;
+	ops->enqueue = h->enqueue;
+	ops->dequeue = h->dequeue;
+	ops->get_count = h->get_count;
+
+	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
+
+	return ops_index;
+}
+
+/* wrapper to allocate an external mempool's private (pool) data. */
+int
+rte_mempool_ops_alloc(struct rte_mempool *mp)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_ops_get(mp->ops_index);
+	return ops->alloc(mp);
+}
+
+/* wrapper to free an external mempool's pool data. */
+void
+rte_mempool_ops_free(struct rte_mempool *mp)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_ops_get(mp->ops_index);
+	if (ops->free == NULL)
+		return;
+	return ops->free(mp);
+}
+
+/* wrapper to get available objects in an external mempool. */
+unsigned int
+rte_mempool_ops_get_count(const struct rte_mempool *mp)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_ops_get(mp->ops_index);
+	return ops->get_count(mp);
+}
+
+/* sets mempool ops previously registered by rte_mempool_ops_register. */
+int
+rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
+	void *pool_config)
+{
+	struct rte_mempool_ops *ops = NULL;
+	unsigned i;
+
+	/* too late, the mempool is already populated. */
+	if (mp->flags & MEMPOOL_F_POOL_CREATED)
+		return -EEXIST;
+
+	for (i = 0; i < rte_mempool_ops_table.num_ops; i++) {
+		if (!strcmp(name,
+				rte_mempool_ops_table.ops[i].name)) {
+			ops = &rte_mempool_ops_table.ops[i];
+			break;
+		}
+	}
+
+	if (ops == NULL)
+		return -EINVAL;
+
+	mp->ops_index = i;
+	mp->pool_config = pool_config;
+	return 0;
+}
diff --git a/lib/librte_mempool/rte_mempool_ring.c b/lib/librte_mempool/rte_mempool_ring.c
new file mode 100644
index 0000000..b9aa64d
--- /dev/null
+++ b/lib/librte_mempool/rte_mempool_ring.c
@@ -0,0 +1,161 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <string.h>
+
+#include <rte_errno.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+
+static int
+common_ring_mp_enqueue(struct rte_mempool *mp, void * const *obj_table,
+		unsigned n)
+{
+	return rte_ring_mp_enqueue_bulk(mp->pool_data, obj_table, n);
+}
+
+static int
+common_ring_sp_enqueue(struct rte_mempool *mp, void * const *obj_table,
+		unsigned n)
+{
+	return rte_ring_sp_enqueue_bulk(mp->pool_data, obj_table, n);
+}
+
+static int
+common_ring_mc_dequeue(struct rte_mempool *mp, void **obj_table, unsigned n)
+{
+	return rte_ring_mc_dequeue_bulk(mp->pool_data, obj_table, n);
+}
+
+static int
+common_ring_sc_dequeue(struct rte_mempool *mp, void **obj_table, unsigned n)
+{
+	return rte_ring_sc_dequeue_bulk(mp->pool_data, obj_table, n);
+}
+
+static unsigned
+common_ring_get_count(const struct rte_mempool *mp)
+{
+	return rte_ring_count(mp->pool_data);
+}
+
+
+static int
+common_ring_alloc(struct rte_mempool *mp)
+{
+	int rg_flags = 0, ret;
+	char rg_name[RTE_RING_NAMESIZE];
+	struct rte_ring *r;
+
+	ret = snprintf(rg_name, sizeof(rg_name),
+		RTE_MEMPOOL_MZ_FORMAT, mp->name);
+	if (ret < 0 || ret >= (int)sizeof(rg_name)) {
+		rte_errno = ENAMETOOLONG;
+		return -rte_errno;
+	}
+
+	/* ring flags */
+	if (mp->flags & MEMPOOL_F_SP_PUT)
+		rg_flags |= RING_F_SP_ENQ;
+	if (mp->flags & MEMPOOL_F_SC_GET)
+		rg_flags |= RING_F_SC_DEQ;
+
+	/*
+	 * Allocate the ring that will be used to store objects.
+	 * Ring functions will return appropriate errors if we are
+	 * running as a secondary process etc., so no checks made
+	 * in this function for that condition.
+	 */
+	r = rte_ring_create(rg_name, rte_align32pow2(mp->size + 1),
+		mp->socket_id, rg_flags);
+	if (r == NULL)
+		return -rte_errno;
+
+	mp->pool_data = r;
+
+	return 0;
+}
+
+static void
+common_ring_free(struct rte_mempool *mp)
+{
+	rte_ring_free(mp->pool_data);
+}
+
+/*
+ * The following 4 declarations of mempool ops structs address
+ * the need for the backward compatible mempool handlers for
+ * single/multi producers and single/multi consumers as dictated by the
+ * flags provided to the rte_mempool_create function
+ */
+static const struct rte_mempool_ops ops_mp_mc = {
+	.name = "ring_mp_mc",
+	.alloc = common_ring_alloc,
+	.free = common_ring_free,
+	.enqueue = common_ring_mp_enqueue,
+	.dequeue = common_ring_mc_dequeue,
+	.get_count = common_ring_get_count,
+};
+
+static const struct rte_mempool_ops ops_sp_sc = {
+	.name = "ring_sp_sc",
+	.alloc = common_ring_alloc,
+	.free = common_ring_free,
+	.enqueue = common_ring_sp_enqueue,
+	.dequeue = common_ring_sc_dequeue,
+	.get_count = common_ring_get_count,
+};
+
+static const struct rte_mempool_ops ops_mp_sc = {
+	.name = "ring_mp_sc",
+	.alloc = common_ring_alloc,
+	.free = common_ring_free,
+	.enqueue = common_ring_mp_enqueue,
+	.dequeue = common_ring_sc_dequeue,
+	.get_count = common_ring_get_count,
+};
+
+static const struct rte_mempool_ops ops_sp_mc = {
+	.name = "ring_sp_mc",
+	.alloc = common_ring_alloc,
+	.free = common_ring_free,
+	.enqueue = common_ring_sp_enqueue,
+	.dequeue = common_ring_mc_dequeue,
+	.get_count = common_ring_get_count,
+};
+
+MEMPOOL_REGISTER_OPS(ops_mp_mc);
+MEMPOOL_REGISTER_OPS(ops_sp_sc);
+MEMPOOL_REGISTER_OPS(ops_mp_sc);
+MEMPOOL_REGISTER_OPS(ops_sp_mc);
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index f63461b..6209ec2 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -20,15 +20,18 @@ DPDK_16.7 {
 	global:
 
 	rte_mempool_check_cookies;
-	rte_mempool_obj_iter;
-	rte_mempool_mem_iter;
 	rte_mempool_create_empty;
+	rte_mempool_free;
+	rte_mempool_mem_iter;
+	rte_mempool_obj_iter;
+	rte_mempool_ops_register;
+	rte_mempool_ops_table;
+	rte_mempool_populate_anon;
+	rte_mempool_populate_default;
 	rte_mempool_populate_phys;
 	rte_mempool_populate_phys_tab;
 	rte_mempool_populate_virt;
-	rte_mempool_populate_default;
-	rte_mempool_populate_anon;
-	rte_mempool_free;
+	rte_mempool_set_ops_byname;
 
 	local: *;
 } DPDK_2.0;
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 238+ messages in thread
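The ops-table indirection introduced by this patch (pools store an integer index into a per-process table of callback structs, so the index remains valid across primary and secondary processes even though function addresses differ) can be sketched as a standalone C program. All names below are illustrative stand-ins, not the DPDK API:

```c
#include <assert.h>
#include <string.h>

#define MAX_OPS 16
#define NAMESIZE 32

struct pool;                      /* forward declaration */

/* a registered set of callbacks, looked up by index or by name */
struct pool_ops {
	char name[NAMESIZE];
	int (*enqueue)(struct pool *p, void *obj);
	int (*dequeue)(struct pool *p, void **obj);
};

struct pool {
	int ops_index;            /* index into ops_table, not a pointer */
	void *objs[8];
	unsigned count;
};

static struct pool_ops ops_table[MAX_OPS];
static unsigned num_ops;

/* register a copy of the callbacks; return the index, or -1 if full */
static int ops_register(const struct pool_ops *h)
{
	if (num_ops >= MAX_OPS)
		return -1;
	ops_table[num_ops] = *h;
	return (int)num_ops++;
}

/* simple LIFO callbacks standing in for a real handler */
static int stack_enqueue(struct pool *p, void *obj)
{
	if (p->count >= 8)
		return -1;
	p->objs[p->count++] = obj;
	return 0;
}

static int stack_dequeue(struct pool *p, void **obj)
{
	if (p->count == 0)
		return -1;
	*obj = p->objs[--p->count];
	return 0;
}

/* wrappers mirroring rte_mempool_ops_enqueue/dequeue_bulk:
 * resolve the callbacks through the index at call time */
static int pool_put(struct pool *p, void *obj)
{
	return ops_table[p->ops_index].enqueue(p, obj);
}

static int pool_get(struct pool *p, void **obj)
{
	return ops_table[p->ops_index].dequeue(p, obj);
}

/* mirrors rte_mempool_set_ops_byname: find the ops struct by name */
static int set_ops_byname(struct pool *p, const char *name)
{
	unsigned i;

	for (i = 0; i < num_ops; i++) {
		if (strcmp(name, ops_table[i].name) == 0) {
			p->ops_index = (int)i;
			return 0;
		}
	}
	return -1;                /* unknown name, mirrors -EINVAL */
}

static int demo(void)
{
	struct pool_ops stack_ops = { "stack", stack_enqueue, stack_dequeue };
	struct pool p = { 0, {0}, 0 };
	void *obj = NULL;
	int x = 42;

	assert(ops_register(&stack_ops) >= 0);
	assert(set_ops_byname(&p, "stack") == 0);
	assert(set_ops_byname(&p, "nope") < 0);
	assert(pool_put(&p, &x) == 0);
	assert(pool_get(&p, &obj) == 0);
	return obj == &x;
}
```

Because only the integer index is stored in the (shared) pool, each process resolves it against its own copy of the table, which is exactly what the patch's comment about secondary processes describes.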

* [PATCH v14 2/3] app/test: test mempool handler
  2016-06-17 13:53                         ` [PATCH v14 0/3] mempool: add mempool handler feature David Hunt
  2016-06-17 13:53                           ` [PATCH v14 1/3] mempool: support mempool handler operations David Hunt
@ 2016-06-17 13:53                           ` David Hunt
  2016-06-17 14:37                             ` Jan Viktorin
  2016-06-17 13:53                           ` [PATCH v14 3/3] mbuf: make default mempool ops configurable at build David Hunt
  2016-06-19 12:05                           ` [PATCH v15 0/3] mempool: add mempool handler feature David Hunt
  3 siblings, 1 reply; 238+ messages in thread
From: David Hunt @ 2016-06-17 13:53 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, viktorin, jerin.jacob, shreyansh.jain, David Hunt

Create a minimal custom mempool handler and check that it
passes basic mempool autotests.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
 app/test/test_mempool.c | 122 +++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 120 insertions(+), 2 deletions(-)

diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c
index b586249..31582d8 100644
--- a/app/test/test_mempool.c
+++ b/app/test/test_mempool.c
@@ -83,6 +83,99 @@
 static rte_atomic32_t synchro;
 
 /*
+ * Simple example of custom mempool structure. Holds pointers to all the
+ * elements which are simply malloc'd in this example.
+ */
+struct custom_mempool {
+	rte_spinlock_t lock;
+	unsigned count;
+	unsigned size;
+	void *elts[];
+};
+
+/*
+ * Loop through all the element pointers and allocate a chunk of memory, then
+ * insert that memory into the ring.
+ */
+static int
+custom_mempool_alloc(struct rte_mempool *mp)
+{
+	struct custom_mempool *cm;
+
+	cm = rte_zmalloc("custom_mempool",
+		sizeof(struct custom_mempool) + mp->size * sizeof(void *), 0);
+	if (cm == NULL)
+		return -ENOMEM;
+
+	rte_spinlock_init(&cm->lock);
+	cm->count = 0;
+	cm->size = mp->size;
+	mp->pool_data = cm;
+	return 0;
+}
+
+static void
+custom_mempool_free(struct rte_mempool *mp)
+{
+	rte_free((void *)(mp->pool_data));
+}
+
+static int
+custom_mempool_enqueue(struct rte_mempool *mp, void * const *obj_table,
+		unsigned n)
+{
+	struct custom_mempool *cm = (struct custom_mempool *)(mp->pool_data);
+	int ret = 0;
+
+	rte_spinlock_lock(&cm->lock);
+	if (cm->count + n > cm->size) {
+		ret = -ENOBUFS;
+	} else {
+		memcpy(&cm->elts[cm->count], obj_table, sizeof(void *) * n);
+		cm->count += n;
+	}
+	rte_spinlock_unlock(&cm->lock);
+	return ret;
+}
+
+
+static int
+custom_mempool_dequeue(struct rte_mempool *mp, void **obj_table, unsigned n)
+{
+	struct custom_mempool *cm = (struct custom_mempool *)(mp->pool_data);
+	int ret = 0;
+
+	rte_spinlock_lock(&cm->lock);
+	if (n > cm->count) {
+		ret = -ENOENT;
+	} else {
+		cm->count -= n;
+		memcpy(obj_table, &cm->elts[cm->count], sizeof(void *) * n);
+	}
+	rte_spinlock_unlock(&cm->lock);
+	return ret;
+}
+
+static unsigned
+custom_mempool_get_count(const struct rte_mempool *mp)
+{
+	struct custom_mempool *cm = (struct custom_mempool *)(mp->pool_data);
+
+	return cm->count;
+}
+
+static struct rte_mempool_ops mempool_ops_custom = {
+	.name = "custom_handler",
+	.alloc = custom_mempool_alloc,
+	.free = custom_mempool_free,
+	.enqueue = custom_mempool_enqueue,
+	.dequeue = custom_mempool_dequeue,
+	.get_count = custom_mempool_get_count,
+};
+
+MEMPOOL_REGISTER_OPS(mempool_ops_custom);
+
+/*
  * save the object number in the first 4 bytes of object data. All
  * other bytes are set to 0.
  */
@@ -292,12 +385,14 @@ static int test_mempool_single_consumer(void)
 * test function for mempool test based on single consumer and single producer,
  * can run on one lcore only
  */
-static int test_mempool_launch_single_consumer(__attribute__((unused)) void *arg)
+static int
+test_mempool_launch_single_consumer(__attribute__((unused)) void *arg)
 {
 	return test_mempool_single_consumer();
 }
 
-static void my_mp_init(struct rte_mempool * mp, __attribute__((unused)) void * arg)
+static void
+my_mp_init(struct rte_mempool *mp, __attribute__((unused)) void *arg)
 {
 	printf("mempool name is %s\n", mp->name);
 	/* nothing to be implemented here*/
@@ -477,6 +572,7 @@ test_mempool(void)
 {
 	struct rte_mempool *mp_cache = NULL;
 	struct rte_mempool *mp_nocache = NULL;
+	struct rte_mempool *mp_ext = NULL;
 
 	rte_atomic32_init(&synchro);
 
@@ -505,6 +601,27 @@ test_mempool(void)
 		goto err;
 	}
 
+	/* create a mempool with an external handler */
+	mp_ext = rte_mempool_create_empty("test_ext",
+		MEMPOOL_SIZE,
+		MEMPOOL_ELT_SIZE,
+		RTE_MEMPOOL_CACHE_MAX_SIZE, 0,
+		SOCKET_ID_ANY, 0);
+
+	if (mp_ext == NULL) {
+		printf("cannot allocate mp_ext mempool\n");
+		goto err;
+	}
+	if (rte_mempool_set_ops_byname(mp_ext, "custom_handler", NULL) < 0) {
+		printf("cannot set custom handler\n");
+		goto err;
+	}
+	if (rte_mempool_populate_default(mp_ext) < 0) {
+		printf("cannot populate mp_ext mempool\n");
+		goto err;
+	}
+	rte_mempool_obj_iter(mp_ext, my_obj_init, NULL);
+
 	/* retrieve the mempool from its name */
 	if (rte_mempool_lookup("test_nocache") != mp_nocache) {
 		printf("Cannot lookup mempool from its name\n");
@@ -545,6 +662,7 @@ test_mempool(void)
 err:
 	rte_mempool_free(mp_nocache);
 	rte_mempool_free(mp_cache);
+	rte_mempool_free(mp_ext);
 	return -1;
 }
 
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 238+ messages in thread
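The custom handler in this test keeps its objects in a lock-protected array used as a LIFO. Its bookkeeping can be sketched portably by substituting a C11 atomic spinlock for rte_spinlock; the names and error values below are illustrative, not DPDK code:

```c
#include <assert.h>
#include <stdatomic.h>
#include <string.h>

#define POOL_SIZE 4

/* mutex-free spin-locked pool mirroring the test's custom_mempool */
struct custom_pool {
	atomic_flag lock;         /* stands in for rte_spinlock_t */
	unsigned count;
	void *elts[POOL_SIZE];
};

static int cp_enqueue(struct custom_pool *cm, void *const *obj_table,
		unsigned n)
{
	int ret = 0;

	while (atomic_flag_test_and_set(&cm->lock))
		;                 /* spin, like rte_spinlock_lock */
	if (cm->count + n > POOL_SIZE)
		ret = -1;         /* pool full: mirrors -ENOBUFS */
	else {
		memcpy(&cm->elts[cm->count], obj_table, sizeof(void *) * n);
		cm->count += n;
	}
	atomic_flag_clear(&cm->lock);
	return ret;
}

static int cp_dequeue(struct custom_pool *cm, void **obj_table, unsigned n)
{
	int ret = 0;

	while (atomic_flag_test_and_set(&cm->lock))
		;
	if (n > cm->count)
		ret = -1;         /* not enough objects: mirrors -ENOENT */
	else {
		cm->count -= n;
		memcpy(obj_table, &cm->elts[cm->count], sizeof(void *) * n);
	}
	atomic_flag_clear(&cm->lock);
	return ret;
}

static int cp_demo(void)
{
	struct custom_pool cm = { ATOMIC_FLAG_INIT, 0, {0} };
	int a = 1, b = 2;
	void *in[2] = { &a, &b };
	void *out[2] = { 0, 0 };

	assert(cp_enqueue(&cm, in, 2) == 0);
	assert(cp_enqueue(&cm, in, 3) < 0);  /* 2 + 3 > POOL_SIZE */
	assert(cp_dequeue(&cm, out, 2) == 0);
	assert(cp_dequeue(&cm, out, 1) < 0); /* pool now empty */
	return out[0] == &a && out[1] == &b;
}
```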

* [PATCH v14 3/3] mbuf: make default mempool ops configurable at build
  2016-06-17 13:53                         ` [PATCH v14 0/3] mempool: add mempool handler feature David Hunt
  2016-06-17 13:53                           ` [PATCH v14 1/3] mempool: support mempool handler operations David Hunt
  2016-06-17 13:53                           ` [PATCH v14 2/3] app/test: test mempool handler David Hunt
@ 2016-06-17 13:53                           ` David Hunt
  2016-06-17 14:41                             ` Jan Viktorin
  2016-06-19 12:05                           ` [PATCH v15 0/3] mempool: add mempool handler feature David Hunt
  3 siblings, 1 reply; 238+ messages in thread
From: David Hunt @ 2016-06-17 13:53 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, viktorin, jerin.jacob, shreyansh.jain, David Hunt

By default, the mempool ops used for mbuf allocations is a multi
producer and multi consumer ring. We could imagine a target (maybe some
network processors?) that provides a hardware-assisted pool
mechanism. In this case, the default configuration for this architecture
would contain a different value for RTE_MBUF_DEFAULT_MEMPOOL_OPS.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
 config/common_base         |  1 +
 lib/librte_mbuf/rte_mbuf.c | 26 ++++++++++++++++++++++----
 2 files changed, 23 insertions(+), 4 deletions(-)

diff --git a/config/common_base b/config/common_base
index 11ac81e..5f230db 100644
--- a/config/common_base
+++ b/config/common_base
@@ -394,6 +394,7 @@ CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG=n
 #
 CONFIG_RTE_LIBRTE_MBUF=y
 CONFIG_RTE_LIBRTE_MBUF_DEBUG=n
+CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS="ring_mp_mc"
 CONFIG_RTE_MBUF_REFCNT_ATOMIC=y
 CONFIG_RTE_PKTMBUF_HEADROOM=128
 
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index 2ece742..8cf5436 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -153,6 +153,7 @@ rte_pktmbuf_pool_create(const char *name, unsigned n,
 	unsigned cache_size, uint16_t priv_size, uint16_t data_room_size,
 	int socket_id)
 {
+	struct rte_mempool *mp;
 	struct rte_pktmbuf_pool_private mbp_priv;
 	unsigned elt_size;
 
@@ -167,10 +168,27 @@ rte_pktmbuf_pool_create(const char *name, unsigned n,
 	mbp_priv.mbuf_data_room_size = data_room_size;
 	mbp_priv.mbuf_priv_size = priv_size;
 
-	return rte_mempool_create(name, n, elt_size,
-		cache_size, sizeof(struct rte_pktmbuf_pool_private),
-		rte_pktmbuf_pool_init, &mbp_priv, rte_pktmbuf_init, NULL,
-		socket_id, 0);
+	mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
+		 sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
+	if (mp == NULL)
+		return NULL;
+
+	rte_errno = rte_mempool_set_ops_byname(mp,
+			RTE_MBUF_DEFAULT_MEMPOOL_OPS, NULL);
+	if (rte_errno != 0) {
+		RTE_LOG(ERR, MBUF, "error setting mempool handler\n");
+		return NULL;
+	}
+	rte_pktmbuf_pool_init(mp, &mbp_priv);
+
+	if (rte_mempool_populate_default(mp) < 0) {
+		rte_mempool_free(mp);
+		return NULL;
+	}
+
+	rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);
+
+	return mp;
 }
 
 /* do some sanity checks on a mbuf: panic if it fails */
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 238+ messages in thread
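The restructured rte_pktmbuf_pool_create above follows a create-empty / set-ops-by-name / populate pipeline, with the handler name taken from a build-time constant that a target's configuration can override. The control flow can be sketched with illustrative stand-ins (none of these names are the DPDK API):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#ifndef DEFAULT_POOL_OPS        /* a config system could override this */
#define DEFAULT_POOL_OPS "ring_mp_mc"
#endif

struct fake_pool { char ops_name[32]; int populated; };

static struct fake_pool the_pool;

static struct fake_pool *pool_create_empty(void)
{
	memset(&the_pool, 0, sizeof(the_pool));
	return &the_pool;
}

static int pool_set_ops_byname(struct fake_pool *p, const char *name)
{
	/* accept only the handlers this sketch pretends to register */
	if (strcmp(name, "ring_mp_mc") != 0 && strcmp(name, "ring_sp_sc") != 0)
		return -1;
	strcpy(p->ops_name, name);
	return 0;
}

static int pool_populate(struct fake_pool *p)
{
	if (p->ops_name[0] == '\0')
		return -1;      /* ops must be chosen before populating */
	p->populated = 1;
	return 0;
}

/* mirrors the patched rte_pktmbuf_pool_create: each step can fail */
static struct fake_pool *pktpool_create(void)
{
	struct fake_pool *p = pool_create_empty();

	if (p == NULL)
		return NULL;
	if (pool_set_ops_byname(p, DEFAULT_POOL_OPS) != 0)
		return NULL;    /* the real code also logs and frees */
	if (pool_populate(p) != 0)
		return NULL;
	return p;
}

static int create_demo(void)
{
	struct fake_pool *p = pktpool_create();

	return p != NULL && p->populated == 1 &&
		strcmp(p->ops_name, "ring_mp_mc") == 0;
}
```

The key design point the patch makes is that the ops must be selected while the pool is still empty; populating first would trip the MEMPOOL_F_POOL_CREATED check and set_ops would return -EEXIST.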

* Re: [PATCH v14 1/3] mempool: support mempool handler operations
  2016-06-17 13:53                           ` [PATCH v14 1/3] mempool: support mempool handler operations David Hunt
@ 2016-06-17 14:35                             ` Jan Viktorin
  2016-06-19 11:44                               ` Hunt, David
  0 siblings, 1 reply; 238+ messages in thread
From: Jan Viktorin @ 2016-06-17 14:35 UTC (permalink / raw)
  To: David Hunt; +Cc: dev, olivier.matz, jerin.jacob, shreyansh.jain

Hi David,

still a few nits... Do you like the upstreaming process? :) I hope to finish
this patchset soon. The major issues seem to be OK.

[...]

> +
> +/**
> + * @internal Get the mempool ops struct from its index.
> + *
> + * @param ops_index
> + *   The index of the ops struct in the ops struct table. It must be a valid
> + *   index: (0 <= idx < num_ops).
> + * @return
> + *   The pointer to the ops struct in the table.
> + */
> +static inline struct rte_mempool_ops *
> +rte_mempool_ops_get(int ops_index)

What about to rename the ops_get to get_ops to not confuse
with the ops wrappers? The thread on this topic has not
been finished. I think, we can live with this name, it's
just a bit weird...

Olivier? Thomas?

> +{
> +	RTE_VERIFY(ops_index < RTE_MEMPOOL_MAX_OPS_IDX);

According to the doc comment:

RTE_VERIFY(ops_index >= 0);

> +
> +	return &rte_mempool_ops_table.ops[ops_index];
> +}

[...]

> +
> +/**
> + * @internal Wrapper for mempool_ops get callback.

This comment is obsolete: it says "get callback" but the wrapper is dequeue_bulk.

> + *
> + * @param mp
> + *   Pointer to the memory pool.
> + * @param obj_table
> + *   Pointer to a table of void * pointers (objects).
> + * @param n
> + *   Number of objects to get.
> + * @return
> + *   - 0: Success; got n objects.
> + *   - <0: Error; code of get function.

"get function" seems to be obsolete, too...

> + */
> +static inline int
> +rte_mempool_ops_dequeue_bulk(struct rte_mempool *mp,
> +		void **obj_table, unsigned n)
> +{
> +	struct rte_mempool_ops *ops;
> +
> +	ops = rte_mempool_ops_get(mp->ops_index);
> +	return ops->dequeue(mp, obj_table, n);
> +}
> +
> +/**
> + * @internal wrapper for mempool_ops put callback.

similar: "put callback"

> + *
> + * @param mp
> + *   Pointer to the memory pool.
> + * @param obj_table
> + *   Pointer to a table of void * pointers (objects).
> + * @param n
> + *   Number of objects to put.
> + * @return
> + *   - 0: Success; n objects supplied.
> + *   - <0: Error; code of put function.

similar: "put function"

> + */
> +static inline int
> +rte_mempool_ops_enqueue_bulk(struct rte_mempool *mp, void * const *obj_table,
> +		unsigned n)
> +{
> +	struct rte_mempool_ops *ops;
> +
> +	ops = rte_mempool_ops_get(mp->ops_index);
> +	return ops->enqueue(mp, obj_table, n);
> +}
> +

[...]

> +
> +/* add a new ops struct in rte_mempool_ops_table, return its index. */
> +int
> +rte_mempool_ops_register(const struct rte_mempool_ops *h)
> +{
> +	struct rte_mempool_ops *ops;
> +	int16_t ops_index;
> +
> +	rte_spinlock_lock(&rte_mempool_ops_table.sl);
> +
> +	if (rte_mempool_ops_table.num_ops >=
> +			RTE_MEMPOOL_MAX_OPS_IDX) {
> +		rte_spinlock_unlock(&rte_mempool_ops_table.sl);
> +		RTE_LOG(ERR, MEMPOOL,
> +			"Maximum number of mempool ops structs exceeded\n");
> +		return -ENOSPC;
> +	}
> +
> +	if (h->alloc == NULL || h->enqueue == NULL ||
> +			h->dequeue == NULL || h->get_count == NULL) {
> +		rte_spinlock_unlock(&rte_mempool_ops_table.sl);
> +		RTE_LOG(ERR, MEMPOOL,
> +			"Missing callback while registering mempool ops\n");
> +		return -EINVAL;
> +	}
> +
> +	if (strlen(h->name) >= sizeof(ops->name) - 1) {
> +		RTE_LOG(DEBUG, EAL, "%s(): mempool_ops <%s>: name too long\n",
> +				__func__, h->name);

The unlock is missing here, isn't it?

> +		rte_errno = EEXIST;
> +		return -EEXIST;
> +	}
> +
> +	ops_index = rte_mempool_ops_table.num_ops++;
> +	ops = &rte_mempool_ops_table.ops[ops_index];
> +	snprintf(ops->name, sizeof(ops->name), "%s", h->name);
> +	ops->alloc = h->alloc;
> +	ops->enqueue = h->enqueue;
> +	ops->dequeue = h->dequeue;
> +	ops->get_count = h->get_count;
> +
> +	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
> +
> +	return ops_index;
> +}
> +
> +/* wrapper to allocate an external mempool's private (pool) data. */
> +int
> +rte_mempool_ops_alloc(struct rte_mempool *mp)
> +{
> +	struct rte_mempool_ops *ops;
> +
> +	ops = rte_mempool_ops_get(mp->ops_index);
> +	return ops->alloc(mp);
> +}
> +
> +/* wrapper to free an external pool ops. */
> +void
> +rte_mempool_ops_free(struct rte_mempool *mp)
> +{
> +	struct rte_mempool_ops *ops;
> +
> +	ops = rte_mempool_ops_get(mp->ops_index);

I assume there would never be an rte_mempool_ops_unregister. Otherwise this function can
return NULL and may lead to a NULL pointer dereference.

> +	if (ops->free == NULL)
> +		return;
> +	return ops->free(mp);

This return statement here is redundant (void).

> +}
> +
> +/* wrapper to get available objects in an external mempool. */
> +unsigned int
> +rte_mempool_ops_get_count(const struct rte_mempool *mp)

[...]

Regards
Jan

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v14 2/3] app/test: test mempool handler
  2016-06-17 13:53                           ` [PATCH v14 2/3] app/test: test mempool handler David Hunt
@ 2016-06-17 14:37                             ` Jan Viktorin
  0 siblings, 0 replies; 238+ messages in thread
From: Jan Viktorin @ 2016-06-17 14:37 UTC (permalink / raw)
  To: David Hunt; +Cc: dev, olivier.matz, jerin.jacob, shreyansh.jain

On Fri, 17 Jun 2016 14:53:37 +0100
David Hunt <david.hunt@intel.com> wrote:

> Create a minimal custom mempool handler and check that it
> passes basic mempool autotests.
> 
> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> Signed-off-by: David Hunt <david.hunt@intel.com>
> Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>
> Acked-by: Olivier Matz <olivier.matz@6wind.com>
> ---
Reviewed-by: Jan Viktorin <viktorin@rehivetech.com>

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v14 3/3] mbuf: make default mempool ops configurable at build
  2016-06-17 13:53                           ` [PATCH v14 3/3] mbuf: make default mempool ops configurable at build David Hunt
@ 2016-06-17 14:41                             ` Jan Viktorin
  0 siblings, 0 replies; 238+ messages in thread
From: Jan Viktorin @ 2016-06-17 14:41 UTC (permalink / raw)
  To: David Hunt; +Cc: dev, olivier.matz, jerin.jacob, shreyansh.jain

On Fri, 17 Jun 2016 14:53:38 +0100
David Hunt <david.hunt@intel.com> wrote:

> By default, the mempool ops used for mbuf allocations is a multi
> producer and multi consumer ring. We could imagine a target (maybe some
> network processors?) that provides a hardware-assisted pool
> mechanism. In this case, the default configuration for this architecture
> would contain a different value for RTE_MBUF_DEFAULT_MEMPOOL_OPS.
> 
> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> Signed-off-by: David Hunt <david.hunt@intel.com>
> Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>
> Acked-by: Olivier Matz <olivier.matz@6wind.com>
> ---
Reviewed-by: Jan Viktorin <viktorin@rehivetech.com>

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v14 1/3] mempool: support mempool handler operations
  2016-06-17 14:35                             ` Jan Viktorin
@ 2016-06-19 11:44                               ` Hunt, David
  0 siblings, 0 replies; 238+ messages in thread
From: Hunt, David @ 2016-06-19 11:44 UTC (permalink / raw)
  To: Jan Viktorin; +Cc: dev, olivier.matz, jerin.jacob, shreyansh.jain



On 17/6/2016 3:35 PM, Jan Viktorin wrote:
> Hi David,
>
> still a few nits... Do you like the upstreaming process? :) I hope to finish
> this patchset soon. The major issues seem to be OK.
>
> [...]
>
>> +
>> +/**
>> + * @internal Get the mempool ops struct from its index.
>> + *
>> + * @param ops_index
>> + *   The index of the ops struct in the ops struct table. It must be a valid
>> + *   index: (0 <= idx < num_ops).
>> + * @return
>> + *   The pointer to the ops struct in the table.
>> + */
>> +static inline struct rte_mempool_ops *
>> +rte_mempool_ops_get(int ops_index)
> What about to rename the ops_get to get_ops to not confuse
> with the ops wrappers? The thread on this topic has not
> been finished. I think, we can live with this name, it's
> just a bit weird...
>
> Olivier? Thomas?

I'll change it, if you think it's weird.

-rte_mempool_ops_get(int ops_index)
+rte_mempool_get_ops(int ops_index)


>> +{
>> +	RTE_VERIFY(ops_index < RTE_MEMPOOL_MAX_OPS_IDX);
> According to the doc comment:
>
> RTE_VERIFY(ops_index >= 0);

Fixed.

-       RTE_VERIFY(ops_index < RTE_MEMPOOL_MAX_OPS_IDX);
+       RTE_VERIFY((ops_index >= 0) && (ops_index < RTE_MEMPOOL_MAX_OPS_IDX));


>> +
>> +	return &rte_mempool_ops_table.ops[ops_index];
>> +}
> [...]
>
>> +
>> +/**
>> + * @internal Wrapper for mempool_ops get callback.
> This comment is obsolete "get callback" vs. dequeue_bulk.

Done.

- * @internal Wrapper for mempool_ops get callback.
+ * @internal Wrapper for mempool_ops dequeue callback.


>> + *
>> + * @param mp
>> + *   Pointer to the memory pool.
>> + * @param obj_table
>> + *   Pointer to a table of void * pointers (objects).
>> + * @param n
>> + *   Number of objects to get.
>> + * @return
>> + *   - 0: Success; got n objects.
>> + *   - <0: Error; code of get function.
> "get function" seems to be obsolete, too...

Done.

- *   - <0: Error; code of get function.
+ *   - <0: Error; code of dequeue function.


>> + */
>> +static inline int
>> +rte_mempool_ops_dequeue_bulk(struct rte_mempool *mp,
>> +		void **obj_table, unsigned n)
>> +{
>> +	struct rte_mempool_ops *ops;
>> +
>> +	ops = rte_mempool_ops_get(mp->ops_index);
>> +	return ops->dequeue(mp, obj_table, n);
>> +}
>> +
>> +/**
>> + * @internal wrapper for mempool_ops put callback.
> similar: "put callback"

Done.

- * @internal wrapper for mempool_ops put callback.
+ * @internal wrapper for mempool_ops enqueue callback.


>> + *
>> + * @param mp
>> + *   Pointer to the memory pool.
>> + * @param obj_table
>> + *   Pointer to a table of void * pointers (objects).
>> + * @param n
>> + *   Number of objects to put.
>> + * @return
>> + *   - 0: Success; n objects supplied.
>> + *   - <0: Error; code of put function.
> similar: "put function"

Done.

- *   - <0: Error; code of put function.
+ *   - <0: Error; code of enqueue function.


>> + */
>> +static inline int
>> +rte_mempool_ops_enqueue_bulk(struct rte_mempool *mp, void * const *obj_table,
>> +		unsigned n)
>> +{
>> +	struct rte_mempool_ops *ops;
>> +
>> +	ops = rte_mempool_ops_get(mp->ops_index);
>> +	return ops->enqueue(mp, obj_table, n);
>> +}
>> +
> [...]
>
>> +
>> +/* add a new ops struct in rte_mempool_ops_table, return its index. */
>> +int
>> +rte_mempool_ops_register(const struct rte_mempool_ops *h)
>> +{
>> +	struct rte_mempool_ops *ops;
>> +	int16_t ops_index;
>> +
>> +	rte_spinlock_lock(&rte_mempool_ops_table.sl);
>> +
>> +	if (rte_mempool_ops_table.num_ops >=
>> +			RTE_MEMPOOL_MAX_OPS_IDX) {
>> +		rte_spinlock_unlock(&rte_mempool_ops_table.sl);
>> +		RTE_LOG(ERR, MEMPOOL,
>> +			"Maximum number of mempool ops structs exceeded\n");
>> +		return -ENOSPC;
>> +	}
>> +
>> +	if (h->alloc == NULL || h->enqueue == NULL ||
>> +			h->dequeue == NULL || h->get_count == NULL) {
>> +		rte_spinlock_unlock(&rte_mempool_ops_table.sl);
>> +		RTE_LOG(ERR, MEMPOOL,
>> +			"Missing callback while registering mempool ops\n");
>> +		return -EINVAL;
>> +	}
>> +
>> +	if (strlen(h->name) >= sizeof(ops->name) - 1) {
>> +		RTE_LOG(DEBUG, EAL, "%s(): mempool_ops <%s>: name too long\n",
>> +				__func__, h->name);
> The unlock is missing here, isn't it?

Yes, well spotted. Fixed.

+               rte_spinlock_unlock(&rte_mempool_ops_table.sl);


>> +		rte_errno = EEXIST;
>> +		return -EEXIST;
>> +	}
>> +
>> +	ops_index = rte_mempool_ops_table.num_ops++;
>> +	ops = &rte_mempool_ops_table.ops[ops_index];
>> +	snprintf(ops->name, sizeof(ops->name), "%s", h->name);
>> +	ops->alloc = h->alloc;
>> +	ops->enqueue = h->enqueue;
>> +	ops->dequeue = h->dequeue;
>> +	ops->get_count = h->get_count;
>> +
>> +	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
>> +
>> +	return ops_index;
>> +}
>> +
>> +/* wrapper to allocate an external mempool's private (pool) data. */
>> +int
>> +rte_mempool_ops_alloc(struct rte_mempool *mp)
>> +{
>> +	struct rte_mempool_ops *ops;
>> +
>> +	ops = rte_mempool_ops_get(mp->ops_index);
>> +	return ops->alloc(mp);
>> +}
>> +
>> +/* wrapper to free an external pool ops. */
>> +void
>> +rte_mempool_ops_free(struct rte_mempool *mp)
>> +{
>> +	struct rte_mempool_ops *ops;
>> +
>> +	ops = rte_mempool_ops_get(mp->ops_index);
> I assume there would never be an rte_mempool_ops_unregister. Otherwise this function can
> return NULL and may lead to a NULL pointer dereference.

I've put in a check for NULL.

>> +	if (ops->free == NULL)
>> +		return;
>> +	return ops->free(mp);
> This return statement here is redundant (void).

Removed.

>> +}
>> +
>> +/* wrapper to get available objects in an external mempool. */
>> +unsigned int
>> +rte_mempool_ops_get_count(const struct rte_mempool *mp)
> [...]
>
> Regards
> Jan

Regards,
David.

^ permalink raw reply	[flat|nested] 238+ messages in thread

* [PATCH v15 0/3] mempool: add mempool handler feature
  2016-06-17 13:53                         ` [PATCH v14 0/3] mempool: add mempool handler feature David Hunt
                                             ` (2 preceding siblings ...)
  2016-06-17 13:53                           ` [PATCH v14 3/3] mbuf: make default mempool ops configurable at build David Hunt
@ 2016-06-19 12:05                           ` David Hunt
  2016-06-19 12:05                             ` [PATCH v15 1/3] mempool: support mempool handler operations David Hunt
                                               ` (4 more replies)
  3 siblings, 5 replies; 238+ messages in thread
From: David Hunt @ 2016-06-19 12:05 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, viktorin, jerin.jacob, shreyansh.jain

Here's the latest version of the Mempool Handler feature patch set.

It's rebased on top of the latest head as of 19/6/2016, including
Olivier's 35-part patch series on mempool re-org [1]

[1] http://dpdk.org/ml/archives/dev/2016-May/039229.html

v15 changes:

 * Changed rte_mempool_ops_get() to rte_mempool_get_ops()
 * Did some minor tweaks to comments after the previous change of function
   names from put/get to enqueue/dequeue
 * Added missing spinlock_unlock in rte_mempool_ops_register()
 * Added check for null in ops_free
 * removed unneeded return statement

v14 changes:

 * set MEMPOOL_F_RING_CREATED flag after rte_mempool_ring_create() is called.
 * Changed name of feature from "external mempool manager" to "mempool handler"
   and updated comments and release notes accordingly.
 * Added a comment for newly added pool_config param in
   rte_mempool_set_ops_byname.

v13 changes:

 * Added in extra opaque data (pool_config) to mempool struct for mempool
   configuration by the ops functions. For example, this can be used to pass
   device names or device flags to the underlying alloc function.
 * Added mempool_config param to rte_mempool_set_ops_byname()

v12 changes:

 * Fixed a comment (function param h -> ops)
 * fixed a typo (callbacki)

v11 changes:

 * Fixed comments (added '.' where needed for consistency)
 * removed ABI breakage notice for mempool manager in deprecation.rst
 * Added description of the external mempool manager functionality to
   doc/guides/prog_guide/mempool_lib.rst (John Mc reviewed)
 * renamed rte_mempool_default.c to rte_mempool_ring.c

v10 changes:

 * changed the _put/_get op names to _enqueue/_dequeue to be consistent
   with the function names
 * some rte_errno cleanup
 * comment tweaks about when to set pool_data
 * removed an un-needed check for ops->alloc == NULL

v9 changes:

 * added a check for NULL alloc in rte_mempool_ops_register
 * rte_mempool_alloc_t now returns int instead of void*
 * fixed some comment typo's
 * removed some unneeded typecasts
 * changed a return NULL to return -EEXIST in rte_mempool_ops_register
 * fixed rte_mempool_version.map file so builds ok as shared libs
 * moved flags check from rte_mempool_create_empty to rte_mempool_create

v8 changes:

 * merged first three patches in the series into one.
 * changed parameters to ops callback to all be rte_mempool pointer
   rather than pointer to opaque data or uint64.
 * comment fixes.
 * fixed parameter to _free function (was inconsistent).
 * changed MEMPOOL_F_RING_CREATED to MEMPOOL_F_POOL_CREATED

v7 changes:

 * Changed rte_mempool_handler_table to rte_mempool_ops_table
 * Changed hander_idx to ops_index in rte_mempool struct
 * Reworked comments in rte_mempool.h around ops functions
 * Changed rte_mempool_hander.c to rte_mempool_ops.c
 * Changed all functions containing _handler_ to _ops_
 * Now there is no mention of 'handler' left
 * Other small changes out of review of mailing list

v6 changes:

 * Moved the flags handling from rte_mempool_create_empty to
   rte_mempool_create, as it's only there for backward compatibility
 * Various comment additions and cleanup
 * Renamed rte_mempool_handler to rte_mempool_ops
 * Added a union for *pool and u64 pool_id in struct rte_mempool
 * split the original patch into a few parts for easier review.
 * rename functions with _ext_ to _ops_.
 * addressed review comments
 * renamed put and get functions to enqueue and dequeue
 * changed occurrences of rte_mempool_ops to const, as they
   contain function pointers (security)
 * split out the default external mempool handler into a separate
   patch for easier review

v5 changes:
 * rebasing, as it is dependent on another patch series [1]

v4 changes (Olivier Matz):
 * remove the rte_mempool_create_ext() function. To change the handler, the
   user has to do the following:
   - mp = rte_mempool_create_empty()
   - rte_mempool_set_handler(mp, "my_handler")
   - rte_mempool_populate_default(mp)
   This avoids adding another function with more than 10 arguments and
   duplicating the doxygen comments
 * change the api of rte_mempool_alloc_t: only the mempool pointer is required
   as all information is available in it
 * change the api of rte_mempool_free_t: remove return value
 * move inline wrapper functions from the .c to the .h (else they won't be
   inlined). This implies having one header file (rte_mempool.h); otherwise
   it would have generated cross-dependency issues.
 * remove now unused MEMPOOL_F_INT_HANDLER (note: it was misused anyway due
   to the use of && instead of &)
 * fix build in debug mode (__MEMPOOL_STAT_ADD(mp, put_pool, n) remaining)
 * fix build with shared libraries (global handler has to be declared in
   the .map file)
 * rationalize #include order
 * remove unused function rte_mempool_get_handler_name()
 * rename some structures, fields, functions
 * remove the static in front of rte_tailq_elem rte_mempool_tailq (comment
   from Yuanhan)
 * test the ext mempool handler in the same file as the standard mempool
   tests, avoiding code duplication
 * rework the custom handler in mempool_test
 * rework a bit the patch selecting default mbuf pool handler
 * fix some doxygen comments

v3 changes:
 * simplified the file layout, renamed to rte_mempool_handler.[hc]
 * moved the default handlers into rte_mempool_default.c
 * moved the example handler out into app/test/test_ext_mempool.c
 * removed is_mc/is_mp change, slight perf degradation on sp cached operation
 * removed stack handler, may re-introduce at a later date
 * Changes out of code reviews

v2 changes:
 * There was a lot of duplicate code between rte_mempool_xmem_create and
   rte_mempool_create_ext. This has now been refactored and is now
   hopefully cleaner.
 * The RTE_NEXT_ABI define is now used to allow building of the library
   in a format that is compatible with binaries built against previous
   versions of DPDK.
 * Changes out of code reviews. Hopefully I've got most of them included.

The Mempool Handler feature is an extension to the mempool API that allows
users to add and use an alternative mempool handler, which allows
external memory subsystems such as external hardware memory management
systems and software based memory allocators to be used with DPDK.

The existing API to the internal DPDK mempool handler will remain unchanged
and will be backward compatible. However, there will be an ABI breakage, as
the mempool struct is changing.

There are two aspects to mempool handlers.
  1. Adding the code for your new mempool operations (ops). This is
     achieved by adding a new mempool ops source file into the
     librte_mempool library, and using the REGISTER_MEMPOOL_OPS macro.
  2. Using the new API to call rte_mempool_create_empty and
     rte_mempool_set_ops_byname to create a new mempool
     using the name parameter to identify which ops to use.

New API calls added
 1. A new rte_mempool_create_empty() function
 2. rte_mempool_set_ops_byname(), which sets the mempool's ops (functions)
 3. rte_mempool_populate_default() and rte_mempool_populate_anon() functions,
    which populate the mempool using the relevant ops

Several mempool handlers may be used in the same application. A new
mempool can then be created by using the new rte_mempool_create_empty function,
then calling rte_mempool_set_ops_byname to point the mempool to the relevant
mempool handler callback (ops) structure.

Legacy applications will continue to use the old rte_mempool_create API call,
which uses a ring based mempool handler by default. These applications will
need to be modified only if they want to use a new mempool handler.

A mempool handler needs to provide the following functions.
 1. alloc     - allocates the mempool memory, and adds each object onto a ring
 2. enqueue   - puts an object back into the mempool once an application has
                finished with it
 3. dequeue   - gets an object from the mempool for use by the application
 4. get_count - gets the number of available objects in the mempool
 5. free      - frees the mempool memory

Every time an enqueue/dequeue/get_count is called from the application/PMD,
the callback for that mempool is called. These functions are in the fastpath,
and any unoptimised ops may limit performance.

The new APIs are as follows:

1. rte_mempool_create_empty

struct rte_mempool *
rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
    unsigned cache_size, unsigned private_data_size,
    int socket_id, unsigned flags);

2. rte_mempool_set_ops_byname()

int
rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
    void *pool_config);

3. rte_mempool_populate_default()

int rte_mempool_populate_default(struct rte_mempool *mp);

4. rte_mempool_populate_anon()

int rte_mempool_populate_anon(struct rte_mempool *mp);

Please see rte_mempool.h for further information on the parameters.


The important thing to note is that the mempool ops struct is passed by name
to rte_mempool_set_ops_byname, which looks through the ops struct array to
get the ops_index, which is then stored in the rte_mempool structure. This
allows multiple processes to use the same mempool, as the function pointers
are accessed via the ops index.

The mempool ops structure contains callbacks to the implementation of
the ops function, and is set up for registration as follows:

static const struct rte_mempool_ops ops_sp_mc = {
    .name = "ring_sp_mc",
    .alloc = rte_mempool_common_ring_alloc,
    .enqueue = common_ring_sp_enqueue,
    .dequeue = common_ring_mc_dequeue,
    .get_count = common_ring_get_count,
    .free = common_ring_free,
};

And then the following macro will register the ops in the array of ops
structures

REGISTER_MEMPOOL_OPS(ops_sp_mc);

For an example of API usage, please see app/test/test_mempool.c, which
implements a rudimentary "custom_handler" mempool handler using simple mallocs
for each mempool object. This file also contains the callbacks and self
registration for the new handler.

David Hunt (2):
  mempool: support mempool handler operations
  mbuf: make default mempool ops configurable at build

Olivier Matz (1):
  app/test: test mempool handler

^ permalink raw reply	[flat|nested] 238+ messages in thread

* [PATCH v15 1/3] mempool: support mempool handler operations
  2016-06-19 12:05                           ` [PATCH v15 0/3] mempool: add mempool handler feature David Hunt
@ 2016-06-19 12:05                             ` David Hunt
  2016-06-19 12:05                             ` [PATCH v15 2/3] app/test: test mempool handler David Hunt
                                               ` (3 subsequent siblings)
  4 siblings, 0 replies; 238+ messages in thread
From: David Hunt @ 2016-06-19 12:05 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, viktorin, jerin.jacob, shreyansh.jain, David Hunt

Until now, the objects stored in a mempool were internally stored in a
ring. This patch introduces the possibility to register external handlers
replacing the ring.

The default behavior remains unchanged, but calling the new function
rte_mempool_set_ops_byname() right after rte_mempool_create_empty() allows
the user to change the handler that will be used when populating
the mempool.

This patch also adds a set of default ops (function callbacks) based
on rte_ring.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
 app/test/test_mempool_perf.c               |   1 -
 doc/guides/prog_guide/mempool_lib.rst      |  32 +++-
 doc/guides/rel_notes/deprecation.rst       |   9 -
 lib/librte_mempool/Makefile                |   2 +
 lib/librte_mempool/rte_mempool.c           |  67 +++-----
 lib/librte_mempool/rte_mempool.h           | 255 ++++++++++++++++++++++++++---
 lib/librte_mempool/rte_mempool_ops.c       | 150 +++++++++++++++++
 lib/librte_mempool/rte_mempool_ring.c      | 161 ++++++++++++++++++
 lib/librte_mempool/rte_mempool_version.map |  13 +-
 9 files changed, 609 insertions(+), 81 deletions(-)
 create mode 100644 lib/librte_mempool/rte_mempool_ops.c
 create mode 100644 lib/librte_mempool/rte_mempool_ring.c

diff --git a/app/test/test_mempool_perf.c b/app/test/test_mempool_perf.c
index c5e3576..c5f8455 100644
--- a/app/test/test_mempool_perf.c
+++ b/app/test/test_mempool_perf.c
@@ -161,7 +161,6 @@ per_lcore_mempool_test(__attribute__((unused)) void *arg)
 							   n_get_bulk);
 				if (unlikely(ret < 0)) {
 					rte_mempool_dump(stdout, mp);
-					rte_ring_dump(stdout, mp->ring);
 					/* in this case, objects are lost... */
 					return -1;
 				}
diff --git a/doc/guides/prog_guide/mempool_lib.rst b/doc/guides/prog_guide/mempool_lib.rst
index c3afc2e..1943fc4 100644
--- a/doc/guides/prog_guide/mempool_lib.rst
+++ b/doc/guides/prog_guide/mempool_lib.rst
@@ -34,7 +34,8 @@ Mempool Library
 ===============
 
 A memory pool is an allocator of a fixed-sized object.
-In the DPDK, it is identified by name and uses a ring to store free objects.
+In the DPDK, it is identified by name and uses a mempool handler to store free objects.
+The default mempool handler is ring based.
 It provides some other optional services such as a per-core object cache and
 an alignment helper to ensure that objects are padded to spread them equally on all DRAM or DDR3 channels.
 
@@ -127,6 +128,35 @@ The maximum size of the cache is static and is defined at compilation time (CONF
    A mempool in Memory with its Associated Ring
 
 
+Mempool Handlers
+------------------------
+
+This allows external memory subsystems, such as external hardware memory
+management systems and software based memory allocators, to be used with DPDK.
+
+There are two aspects to a mempool handler.
+
+* Adding the code for your new mempool operations (ops). This is achieved by
+  adding a new mempool ops code, and using the ``REGISTER_MEMPOOL_OPS`` macro.
+
+* Using the new API to call ``rte_mempool_create_empty()`` and
+  ``rte_mempool_set_ops_byname()`` to create a new mempool and specifying which
+  ops to use.
+
+Several different mempool handlers may be used in the same application. A new
+mempool can be created by using the ``rte_mempool_create_empty()`` function,
+then using ``rte_mempool_set_ops_byname()`` to point the mempool to the
+relevant mempool handler callback (ops) structure.
+
+Legacy applications may continue to use the old ``rte_mempool_create()`` API
+call, which uses a ring based mempool handler by default. These applications
+will need to be modified to use a new mempool handler.
+
+For applications that use ``rte_pktmbuf_pool_create()``, there is a config setting
+(``RTE_MBUF_DEFAULT_MEMPOOL_OPS``) that allows the application to make use of
+an alternative mempool handler.
+
+
 Use Cases
 ---------
 
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index f75183f..3cbc19e 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -34,15 +34,6 @@ Deprecation Notices
   compact API. The ones that remain are backwards compatible and use the
   per-lcore default cache if available. This change targets release 16.07.
 
-* The rte_mempool struct will be changed in 16.07 to facilitate the new
-  external mempool manager functionality.
-  The ring element will be replaced with a more generic 'pool' opaque pointer
-  to allow new mempool handlers to use their own user-defined mempool
-  layout. Also newly added to rte_mempool is a handler index.
-  The existing API will be backward compatible, but there will be new API
-  functions added to facilitate the creation of mempools using an external
-  handler. The 16.07 release will contain these changes.
-
 * A librte_vhost public structures refactor is planned for DPDK 16.07
   that requires both ABI and API change.
   The proposed refactor would expose DPDK vhost dev to applications as
diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
index 43423e0..a4c089e 100644
--- a/lib/librte_mempool/Makefile
+++ b/lib/librte_mempool/Makefile
@@ -42,6 +42,8 @@ LIBABIVER := 2
 
 # all source are stored in SRCS-y
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool.c
+SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_ops.c
+SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_ring.c
 # install includes
 SYMLINK-$(CONFIG_RTE_LIBRTE_MEMPOOL)-include := rte_mempool.h
 
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index af71edd..e6a83d0 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -148,7 +148,7 @@ mempool_add_elem(struct rte_mempool *mp, void *obj, phys_addr_t physaddr)
 #endif
 
 	/* enqueue in ring */
-	rte_ring_sp_enqueue(mp->ring, obj);
+	rte_mempool_ops_enqueue_bulk(mp, &obj, 1);
 }
 
 /* call obj_cb() for each mempool element */
@@ -303,40 +303,6 @@ rte_mempool_xmem_usage(__rte_unused void *vaddr, uint32_t elt_num,
 	return (size_t)paddr_idx << pg_shift;
 }
 
-/* create the internal ring */
-static int
-rte_mempool_ring_create(struct rte_mempool *mp)
-{
-	int rg_flags = 0, ret;
-	char rg_name[RTE_RING_NAMESIZE];
-	struct rte_ring *r;
-
-	ret = snprintf(rg_name, sizeof(rg_name),
-		RTE_MEMPOOL_MZ_FORMAT, mp->name);
-	if (ret < 0 || ret >= (int)sizeof(rg_name))
-		return -ENAMETOOLONG;
-
-	/* ring flags */
-	if (mp->flags & MEMPOOL_F_SP_PUT)
-		rg_flags |= RING_F_SP_ENQ;
-	if (mp->flags & MEMPOOL_F_SC_GET)
-		rg_flags |= RING_F_SC_DEQ;
-
-	/* Allocate the ring that will be used to store objects.
-	 * Ring functions will return appropriate errors if we are
-	 * running as a secondary process etc., so no checks made
-	 * in this function for that condition.
-	 */
-	r = rte_ring_create(rg_name, rte_align32pow2(mp->size + 1),
-		mp->socket_id, rg_flags);
-	if (r == NULL)
-		return -rte_errno;
-
-	mp->ring = r;
-	mp->flags |= MEMPOOL_F_RING_CREATED;
-	return 0;
-}
-
 /* free a memchunk allocated with rte_memzone_reserve() */
 static void
 rte_mempool_memchunk_mz_free(__rte_unused struct rte_mempool_memhdr *memhdr,
@@ -354,7 +320,7 @@ rte_mempool_free_memchunks(struct rte_mempool *mp)
 	void *elt;
 
 	while (!STAILQ_EMPTY(&mp->elt_list)) {
-		rte_ring_sc_dequeue(mp->ring, &elt);
+		rte_mempool_ops_dequeue_bulk(mp, &elt, 1);
 		(void)elt;
 		STAILQ_REMOVE_HEAD(&mp->elt_list, next);
 		mp->populated_size--;
@@ -386,10 +352,11 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
 	int ret;
 
 	/* create the internal ring if not already done */
-	if ((mp->flags & MEMPOOL_F_RING_CREATED) == 0) {
-		ret = rte_mempool_ring_create(mp);
-		if (ret < 0)
+	if ((mp->flags & MEMPOOL_F_POOL_CREATED) == 0) {
+		ret = rte_mempool_ops_alloc(mp);
+		if (ret != 0)
 			return ret;
+		mp->flags |= MEMPOOL_F_POOL_CREATED;
 	}
 
 	/* mempool is already populated */
@@ -703,7 +670,7 @@ rte_mempool_free(struct rte_mempool *mp)
 	rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
 
 	rte_mempool_free_memchunks(mp);
-	rte_ring_free(mp->ring);
+	rte_mempool_ops_free(mp);
 	rte_memzone_free(mp->mz);
 }
 
@@ -815,6 +782,7 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
 		RTE_PTR_ADD(mp, MEMPOOL_HEADER_SIZE(mp, 0));
 
 	te->data = mp;
+
 	rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
 	TAILQ_INSERT_TAIL(mempool_list, te, next);
 	rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
@@ -844,6 +812,19 @@ rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
 	if (mp == NULL)
 		return NULL;
 
+	/*
+	 * Since we have 4 combinations of the SP/SC/MP/MC examine the flags to
+	 * set the correct index into the table of ops structs.
+	 */
+	if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
+		rte_mempool_set_ops_byname(mp, "ring_sp_sc", NULL);
+	else if (flags & MEMPOOL_F_SP_PUT)
+		rte_mempool_set_ops_byname(mp, "ring_sp_mc", NULL);
+	else if (flags & MEMPOOL_F_SC_GET)
+		rte_mempool_set_ops_byname(mp, "ring_mp_sc", NULL);
+	else
+		rte_mempool_set_ops_byname(mp, "ring_mp_mc", NULL);
+
 	/* call the mempool priv initializer */
 	if (mp_init)
 		mp_init(mp, mp_init_arg);
@@ -930,7 +911,7 @@ rte_mempool_count(const struct rte_mempool *mp)
 	unsigned count;
 	unsigned lcore_id;
 
-	count = rte_ring_count(mp->ring);
+	count = rte_mempool_ops_get_count(mp);
 
 	if (mp->cache_size == 0)
 		return count;
@@ -1119,7 +1100,7 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 
 	fprintf(f, "mempool <%s>@%p\n", mp->name, mp);
 	fprintf(f, "  flags=%x\n", mp->flags);
-	fprintf(f, "  ring=<%s>@%p\n", mp->ring->name, mp->ring);
+	fprintf(f, "  pool=%p\n", mp->pool_data);
 	fprintf(f, "  phys_addr=0x%" PRIx64 "\n", mp->mz->phys_addr);
 	fprintf(f, "  nb_mem_chunks=%u\n", mp->nb_mem_chunks);
 	fprintf(f, "  size=%"PRIu32"\n", mp->size);
@@ -1140,7 +1121,7 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 	}
 
 	cache_count = rte_mempool_dump_cache(f, mp);
-	common_count = rte_ring_count(mp->ring);
+	common_count = rte_mempool_ops_get_count(mp);
 	if ((cache_count + common_count) > mp->size)
 		common_count = mp->size - cache_count;
 	fprintf(f, "  common_pool_count=%u\n", common_count);
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 60339bd..2d7c980 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -67,6 +67,7 @@
 #include <inttypes.h>
 #include <sys/queue.h>
 
+#include <rte_spinlock.h>
 #include <rte_log.h>
 #include <rte_debug.h>
 #include <rte_lcore.h>
@@ -203,10 +204,14 @@ struct rte_mempool_memhdr {
  */
 struct rte_mempool {
 	char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool. */
-	struct rte_ring *ring;           /**< Ring to store objects. */
-	const struct rte_memzone *mz;    /**< Memzone where pool is allocated */
+	union {
+		void *pool_data;         /**< Ring or pool to store objects. */
+		uint64_t pool_id;        /**< External mempool identifier. */
+	};
+	void *pool_config;               /**< optional args for ops alloc. */
+	const struct rte_memzone *mz;    /**< Memzone where pool is alloc'd. */
 	int flags;                       /**< Flags of the mempool. */
-	int socket_id;                   /**< Socket id passed at mempool creation. */
+	int socket_id;                   /**< Socket id passed at create. */
 	uint32_t size;                   /**< Max size of the mempool. */
 	uint32_t cache_size;             /**< Size of per-lcore local cache. */
 	uint32_t cache_flushthresh;
@@ -217,6 +222,14 @@ struct rte_mempool {
 	uint32_t trailer_size;           /**< Size of trailer (after elt). */
 
 	unsigned private_data_size;      /**< Size of private data. */
+	/**
+	 * Index into rte_mempool_ops_table array of mempool ops
+	 * structs, which contain callback function pointers.
+	 * We're using an index here rather than pointers to the callbacks
+	 * to facilitate any secondary processes that may want to use
+	 * this mempool.
+	 */
+	int32_t ops_index;
 
 	struct rte_mempool_cache *local_cache; /**< Per-lcore local cache */
 
@@ -235,7 +248,7 @@ struct rte_mempool {
 #define MEMPOOL_F_NO_CACHE_ALIGN 0x0002 /**< Do not align objs on cache lines.*/
 #define MEMPOOL_F_SP_PUT         0x0004 /**< Default put is "single-producer".*/
 #define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
-#define MEMPOOL_F_RING_CREATED   0x0010 /**< Internal: ring is created */
+#define MEMPOOL_F_POOL_CREATED   0x0010 /**< Internal: pool is created. */
 #define MEMPOOL_F_NO_PHYS_CONTIG 0x0020 /**< Don't need physically contiguous objs. */
 
 /**
@@ -325,6 +338,215 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp,
 #define __mempool_check_cookies(mp, obj_table_const, n, free) do {} while(0)
 #endif /* RTE_LIBRTE_MEMPOOL_DEBUG */
 
+#define RTE_MEMPOOL_OPS_NAMESIZE 32 /**< Max length of ops struct name. */
+
+/**
+ * Prototype for implementation specific data provisioning function.
+ *
+ * The function should provide the implementation specific memory
+ * for use by the other mempool ops functions in a given mempool ops struct.
+ * E.g. the default ops provides an instance of the rte_ring for this purpose.
+ * Other implementations will most likely use a different type of data
+ * structure, which remains transparent to the application programmer.
+ * This function should set mp->pool_data.
+ */
+typedef int (*rte_mempool_alloc_t)(struct rte_mempool *mp);
+
+/**
+ * Free the opaque private data pointed to by mp->pool_data pointer.
+ */
+typedef void (*rte_mempool_free_t)(struct rte_mempool *mp);
+
+/**
+ * Enqueue an object into the external pool.
+ */
+typedef int (*rte_mempool_enqueue_t)(struct rte_mempool *mp,
+		void * const *obj_table, unsigned int n);
+
+/**
+ * Dequeue an object from the external pool.
+ */
+typedef int (*rte_mempool_dequeue_t)(struct rte_mempool *mp,
+		void **obj_table, unsigned int n);
+
+/**
+ * Return the number of available objects in the external pool.
+ */
+typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
+
+/** Structure defining mempool operations structure */
+struct rte_mempool_ops {
+	char name[RTE_MEMPOOL_OPS_NAMESIZE]; /**< Name of mempool ops struct. */
+	rte_mempool_alloc_t alloc;       /**< Allocate private data. */
+	rte_mempool_free_t free;         /**< Free the external pool. */
+	rte_mempool_enqueue_t enqueue;   /**< Enqueue an object. */
+	rte_mempool_dequeue_t dequeue;   /**< Dequeue an object. */
+	rte_mempool_get_count get_count; /**< Get qty of available objs. */
+} __rte_cache_aligned;
+
+#define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */
+
+/**
+ * Structure storing the table of registered ops structs, each of which
+ * contains the function pointers for the mempool ops functions.
+ * Each process has its own storage for this ops struct array so that
+ * the mempools can be shared across primary and secondary processes.
+ * The indices used to access the array are valid across processes, whereas
+ * any function pointers stored directly in the mempool struct would not be.
+ * This results in us simply having "ops_index" in the mempool struct.
+ */
+struct rte_mempool_ops_table {
+	rte_spinlock_t sl;     /**< Spinlock for add/delete. */
+	uint32_t num_ops;      /**< Number of used ops structs in the table. */
+	/**
+	 * Storage for all possible ops structs.
+	 */
+	struct rte_mempool_ops ops[RTE_MEMPOOL_MAX_OPS_IDX];
+} __rte_cache_aligned;
+
+/** Array of registered ops structs. */
+extern struct rte_mempool_ops_table rte_mempool_ops_table;
+
+/**
+ * @internal Get the mempool ops struct from its index.
+ *
+ * @param ops_index
+ *   The index of the ops struct in the ops struct table. It must be a valid
+ *   index: (0 <= idx < num_ops).
+ * @return
+ *   The pointer to the ops struct in the table.
+ */
+static inline struct rte_mempool_ops *
+rte_mempool_ops_get(int ops_index)
+{
+	RTE_VERIFY(ops_index < RTE_MEMPOOL_MAX_OPS_IDX);
+
+	return &rte_mempool_ops_table.ops[ops_index];
+}
+
+/**
+ * @internal Wrapper for mempool_ops alloc callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @return
+ *   - 0: Success; successfully allocated mempool pool_data.
+ *   - <0: Error; code of alloc function.
+ */
+int
+rte_mempool_ops_alloc(struct rte_mempool *mp);
+
+/**
+ * @internal Wrapper for mempool_ops get callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param obj_table
+ *   Pointer to a table of void * pointers (objects).
+ * @param n
+ *   Number of objects to get.
+ * @return
+ *   - 0: Success; got n objects.
+ *   - <0: Error; code of get function.
+ */
+static inline int
+rte_mempool_ops_dequeue_bulk(struct rte_mempool *mp,
+		void **obj_table, unsigned n)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_ops_get(mp->ops_index);
+	return ops->dequeue(mp, obj_table, n);
+}
+
+/**
+ * @internal wrapper for mempool_ops put callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param obj_table
+ *   Pointer to a table of void * pointers (objects).
+ * @param n
+ *   Number of objects to put.
+ * @return
+ *   - 0: Success; n objects supplied.
+ *   - <0: Error; code of put function.
+ */
+static inline int
+rte_mempool_ops_enqueue_bulk(struct rte_mempool *mp, void * const *obj_table,
+		unsigned n)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_ops_get(mp->ops_index);
+	return ops->enqueue(mp, obj_table, n);
+}
+
+/**
+ * @internal wrapper for mempool_ops get_count callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @return
+ *   The number of available objects in the external pool.
+ */
+unsigned
+rte_mempool_ops_get_count(const struct rte_mempool *mp);
+
+/**
+ * @internal wrapper for mempool_ops free callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ */
+void
+rte_mempool_ops_free(struct rte_mempool *mp);
+
+/**
+ * Set the ops of a mempool.
+ *
+ * This can only be done on a mempool that is not populated, i.e. just after
+ * a call to rte_mempool_create_empty().
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param name
+ *   Name of the ops structure to use for this mempool.
+ * @param pool_config
+ *   Opaque data that can be passed by the application to the ops functions.
+ * @return
+ *   - 0: Success; the mempool is now using the requested ops functions.
+ *   - -EINVAL - Invalid ops struct name provided.
+ *   - -EEXIST - mempool already has an ops struct assigned.
+ */
+int
+rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
+		void *pool_config);
+
+/**
+ * Register mempool operations.
+ *
+ * @param ops
+ *   Pointer to an ops structure to register.
+ * @return
+ *   - >=0: Success; return the index of the ops struct in the table.
+ *   - -EINVAL - some missing callbacks while registering ops struct.
+ *   - -ENOSPC - the maximum number of ops structs has been reached.
+ */
+int rte_mempool_ops_register(const struct rte_mempool_ops *ops);
+
+/**
+ * Macro to statically register the ops of a mempool handler.
+ * Note that rte_mempool_ops_register fails silently here when more
+ * than RTE_MEMPOOL_MAX_OPS_IDX ops structs have been registered.
+ */
+#define MEMPOOL_REGISTER_OPS(ops)					\
+	void mp_hdlr_init_##ops(void);					\
+	void __attribute__((constructor, used)) mp_hdlr_init_##ops(void)\
+	{								\
+		rte_mempool_ops_register(&ops);			\
+	}
+
 /**
  * An object callback function for mempool.
  *
@@ -774,7 +996,7 @@ __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
 	cache->len += n;
 
 	if (cache->len >= flushthresh) {
-		rte_ring_mp_enqueue_bulk(mp->ring, &cache->objs[cache_size],
+		rte_mempool_ops_enqueue_bulk(mp, &cache->objs[cache_size],
 				cache->len - cache_size);
 		cache->len = cache_size;
 	}
@@ -785,19 +1007,10 @@ ring_enqueue:
 
 	/* push remaining objects in ring */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-	if (is_mp) {
-		if (rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n) < 0)
-			rte_panic("cannot put objects in mempool\n");
-	}
-	else {
-		if (rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n) < 0)
-			rte_panic("cannot put objects in mempool\n");
-	}
+	if (rte_mempool_ops_enqueue_bulk(mp, obj_table, n) < 0)
+		rte_panic("cannot put objects in mempool\n");
 #else
-	if (is_mp)
-		rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n);
-	else
-		rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n);
+	rte_mempool_ops_enqueue_bulk(mp, obj_table, n);
 #endif
 }
 
@@ -945,7 +1158,8 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
 		uint32_t req = n + (cache_size - cache->len);
 
 		/* How many do we require i.e. number to fill the cache + the request */
-		ret = rte_ring_mc_dequeue_bulk(mp->ring, &cache->objs[cache->len], req);
+		ret = rte_mempool_ops_dequeue_bulk(mp,
+			&cache->objs[cache->len], req);
 		if (unlikely(ret < 0)) {
 			/*
 			 * In the offchance that we are buffer constrained,
@@ -972,10 +1186,7 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
 ring_dequeue:
 
 	/* get remaining objects from ring */
-	if (is_mc)
-		ret = rte_ring_mc_dequeue_bulk(mp->ring, obj_table, n);
-	else
-		ret = rte_ring_sc_dequeue_bulk(mp->ring, obj_table, n);
+	ret = rte_mempool_ops_dequeue_bulk(mp, obj_table, n);
 
 	if (ret < 0)
 		__MEMPOOL_STAT_ADD(mp, get_fail, n);
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
new file mode 100644
index 0000000..7977a14
--- /dev/null
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -0,0 +1,150 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2016 6WIND S.A.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <string.h>
+
+#include <rte_mempool.h>
+#include <rte_errno.h>
+
+/* indirect jump table to support external memory pools. */
+struct rte_mempool_ops_table rte_mempool_ops_table = {
+	.sl =  RTE_SPINLOCK_INITIALIZER,
+	.num_ops = 0
+};
+
+/* add a new ops struct in rte_mempool_ops_table, return its index. */
+int
+rte_mempool_ops_register(const struct rte_mempool_ops *h)
+{
+	struct rte_mempool_ops *ops;
+	int16_t ops_index;
+
+	rte_spinlock_lock(&rte_mempool_ops_table.sl);
+
+	if (rte_mempool_ops_table.num_ops >=
+			RTE_MEMPOOL_MAX_OPS_IDX) {
+		rte_spinlock_unlock(&rte_mempool_ops_table.sl);
+		RTE_LOG(ERR, MEMPOOL,
+			"Maximum number of mempool ops structs exceeded\n");
+		return -ENOSPC;
+	}
+
+	if (h->alloc == NULL || h->enqueue == NULL ||
+			h->dequeue == NULL || h->get_count == NULL) {
+		rte_spinlock_unlock(&rte_mempool_ops_table.sl);
+		RTE_LOG(ERR, MEMPOOL,
+			"Missing callback while registering mempool ops\n");
+		return -EINVAL;
+	}
+
+	if (strlen(h->name) >= sizeof(ops->name) - 1) {
+		rte_spinlock_unlock(&rte_mempool_ops_table.sl);
+		RTE_LOG(DEBUG, EAL, "%s(): mempool_ops <%s>: name too long\n",
+				__func__, h->name);
+		rte_errno = EEXIST;
+		return -EEXIST;
+	}
+
+	ops_index = rte_mempool_ops_table.num_ops++;
+	ops = &rte_mempool_ops_table.ops[ops_index];
+	snprintf(ops->name, sizeof(ops->name), "%s", h->name);
+	ops->alloc = h->alloc;
+	ops->enqueue = h->enqueue;
+	ops->dequeue = h->dequeue;
+	ops->get_count = h->get_count;
+
+	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
+
+	return ops_index;
+}
+
+/* wrapper to allocate an external mempool's private (pool) data. */
+int
+rte_mempool_ops_alloc(struct rte_mempool *mp)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_ops_get(mp->ops_index);
+	return ops->alloc(mp);
+}
+
+/* wrapper to free the external pool managed by the ops struct. */
+void
+rte_mempool_ops_free(struct rte_mempool *mp)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_ops_get(mp->ops_index);
+	if (ops->free == NULL)
+		return;
+	ops->free(mp);
+}
+
+/* wrapper to get available objects in an external mempool. */
+unsigned int
+rte_mempool_ops_get_count(const struct rte_mempool *mp)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_ops_get(mp->ops_index);
+	return ops->get_count(mp);
+}
+
+/* sets mempool ops previously registered by rte_mempool_ops_register. */
+int
+rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
+	void *pool_config)
+{
+	struct rte_mempool_ops *ops = NULL;
+	unsigned i;
+
+	/* too late, the mempool is already populated. */
+	if (mp->flags & MEMPOOL_F_POOL_CREATED)
+		return -EEXIST;
+
+	for (i = 0; i < rte_mempool_ops_table.num_ops; i++) {
+		if (!strcmp(name,
+				rte_mempool_ops_table.ops[i].name)) {
+			ops = &rte_mempool_ops_table.ops[i];
+			break;
+		}
+	}
+
+	if (ops == NULL)
+		return -EINVAL;
+
+	mp->ops_index = i;
+	mp->pool_config = pool_config;
+	return 0;
+}
diff --git a/lib/librte_mempool/rte_mempool_ring.c b/lib/librte_mempool/rte_mempool_ring.c
new file mode 100644
index 0000000..b9aa64d
--- /dev/null
+++ b/lib/librte_mempool/rte_mempool_ring.c
@@ -0,0 +1,161 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <string.h>
+
+#include <rte_errno.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+
+static int
+common_ring_mp_enqueue(struct rte_mempool *mp, void * const *obj_table,
+		unsigned n)
+{
+	return rte_ring_mp_enqueue_bulk(mp->pool_data, obj_table, n);
+}
+
+static int
+common_ring_sp_enqueue(struct rte_mempool *mp, void * const *obj_table,
+		unsigned n)
+{
+	return rte_ring_sp_enqueue_bulk(mp->pool_data, obj_table, n);
+}
+
+static int
+common_ring_mc_dequeue(struct rte_mempool *mp, void **obj_table, unsigned n)
+{
+	return rte_ring_mc_dequeue_bulk(mp->pool_data, obj_table, n);
+}
+
+static int
+common_ring_sc_dequeue(struct rte_mempool *mp, void **obj_table, unsigned n)
+{
+	return rte_ring_sc_dequeue_bulk(mp->pool_data, obj_table, n);
+}
+
+static unsigned
+common_ring_get_count(const struct rte_mempool *mp)
+{
+	return rte_ring_count(mp->pool_data);
+}
+
+
+static int
+common_ring_alloc(struct rte_mempool *mp)
+{
+	int rg_flags = 0, ret;
+	char rg_name[RTE_RING_NAMESIZE];
+	struct rte_ring *r;
+
+	ret = snprintf(rg_name, sizeof(rg_name),
+		RTE_MEMPOOL_MZ_FORMAT, mp->name);
+	if (ret < 0 || ret >= (int)sizeof(rg_name)) {
+		rte_errno = ENAMETOOLONG;
+		return -rte_errno;
+	}
+
+	/* ring flags */
+	if (mp->flags & MEMPOOL_F_SP_PUT)
+		rg_flags |= RING_F_SP_ENQ;
+	if (mp->flags & MEMPOOL_F_SC_GET)
+		rg_flags |= RING_F_SC_DEQ;
+
+	/*
+	 * Allocate the ring that will be used to store objects.
+	 * Ring functions will return appropriate errors if we are
+	 * running as a secondary process etc., so no checks made
+	 * in this function for that condition.
+	 */
+	r = rte_ring_create(rg_name, rte_align32pow2(mp->size + 1),
+		mp->socket_id, rg_flags);
+	if (r == NULL)
+		return -rte_errno;
+
+	mp->pool_data = r;
+
+	return 0;
+}
+
+static void
+common_ring_free(struct rte_mempool *mp)
+{
+	rte_ring_free(mp->pool_data);
+}
+
+/*
+ * The following 4 declarations of mempool ops structs address
+ * the need for the backward compatible mempool handlers for
+ * single/multi producers and single/multi consumers as dictated by the
+ * flags provided to the rte_mempool_create function
+ */
+static const struct rte_mempool_ops ops_mp_mc = {
+	.name = "ring_mp_mc",
+	.alloc = common_ring_alloc,
+	.free = common_ring_free,
+	.enqueue = common_ring_mp_enqueue,
+	.dequeue = common_ring_mc_dequeue,
+	.get_count = common_ring_get_count,
+};
+
+static const struct rte_mempool_ops ops_sp_sc = {
+	.name = "ring_sp_sc",
+	.alloc = common_ring_alloc,
+	.free = common_ring_free,
+	.enqueue = common_ring_sp_enqueue,
+	.dequeue = common_ring_sc_dequeue,
+	.get_count = common_ring_get_count,
+};
+
+static const struct rte_mempool_ops ops_mp_sc = {
+	.name = "ring_mp_sc",
+	.alloc = common_ring_alloc,
+	.free = common_ring_free,
+	.enqueue = common_ring_mp_enqueue,
+	.dequeue = common_ring_sc_dequeue,
+	.get_count = common_ring_get_count,
+};
+
+static const struct rte_mempool_ops ops_sp_mc = {
+	.name = "ring_sp_mc",
+	.alloc = common_ring_alloc,
+	.free = common_ring_free,
+	.enqueue = common_ring_sp_enqueue,
+	.dequeue = common_ring_mc_dequeue,
+	.get_count = common_ring_get_count,
+};
+
+MEMPOOL_REGISTER_OPS(ops_mp_mc);
+MEMPOOL_REGISTER_OPS(ops_sp_sc);
+MEMPOOL_REGISTER_OPS(ops_mp_sc);
+MEMPOOL_REGISTER_OPS(ops_sp_mc);
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index f63461b..6209ec2 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -20,15 +20,18 @@ DPDK_16.7 {
 	global:
 
 	rte_mempool_check_cookies;
-	rte_mempool_obj_iter;
-	rte_mempool_mem_iter;
 	rte_mempool_create_empty;
+	rte_mempool_free;
+	rte_mempool_mem_iter;
+	rte_mempool_obj_iter;
+	rte_mempool_ops_register;
+	rte_mempool_ops_table;
+	rte_mempool_populate_anon;
+	rte_mempool_populate_default;
 	rte_mempool_populate_phys;
 	rte_mempool_populate_phys_tab;
 	rte_mempool_populate_virt;
-	rte_mempool_populate_default;
-	rte_mempool_populate_anon;
-	rte_mempool_free;
+	rte_mempool_set_ops_byname;
 
 	local: *;
 } DPDK_2.0;
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 238+ messages in thread

* [PATCH v15 2/3] app/test: test mempool handler
  2016-06-19 12:05                           ` [PATCH v15 0/3] mempool: add mempool handler feature David Hunt
  2016-06-19 12:05                             ` [PATCH v15 1/3] mempool: support mempool handler operations David Hunt
@ 2016-06-19 12:05                             ` David Hunt
  2016-06-19 12:05                             ` [PATCH v15 3/3] mbuf: make default mempool ops configurable at build David Hunt
                                               ` (2 subsequent siblings)
  4 siblings, 0 replies; 238+ messages in thread
From: David Hunt @ 2016-06-19 12:05 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, viktorin, jerin.jacob, shreyansh.jain, David Hunt

Create a minimal custom mempool handler and check that it
passes basic mempool autotests.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
Reviewed-by: Jan Viktorin <viktorin@rehivetech.com>
Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
 app/test/test_mempool.c | 122 +++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 120 insertions(+), 2 deletions(-)

diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c
index b586249..31582d8 100644
--- a/app/test/test_mempool.c
+++ b/app/test/test_mempool.c
@@ -83,6 +83,99 @@
 static rte_atomic32_t synchro;
 
 /*
+ * Simple example of custom mempool structure. Holds pointers to all the
+ * elements which are simply malloc'd in this example.
+ */
+struct custom_mempool {
+	rte_spinlock_t lock;
+	unsigned count;
+	unsigned size;
+	void *elts[];
+};
+
+/*
+ * Allocate the custom mempool structure, including room for the array
+ * of element pointers used to hold the pool's objects.
+ */
+static int
+custom_mempool_alloc(struct rte_mempool *mp)
+{
+	struct custom_mempool *cm;
+
+	cm = rte_zmalloc("custom_mempool",
+		sizeof(struct custom_mempool) + mp->size * sizeof(void *), 0);
+	if (cm == NULL)
+		return -ENOMEM;
+
+	rte_spinlock_init(&cm->lock);
+	cm->count = 0;
+	cm->size = mp->size;
+	mp->pool_data = cm;
+	return 0;
+}
+
+static void
+custom_mempool_free(struct rte_mempool *mp)
+{
+	rte_free((void *)(mp->pool_data));
+}
+
+static int
+custom_mempool_enqueue(struct rte_mempool *mp, void * const *obj_table,
+		unsigned n)
+{
+	struct custom_mempool *cm = (struct custom_mempool *)(mp->pool_data);
+	int ret = 0;
+
+	rte_spinlock_lock(&cm->lock);
+	if (cm->count + n > cm->size) {
+		ret = -ENOBUFS;
+	} else {
+		memcpy(&cm->elts[cm->count], obj_table, sizeof(void *) * n);
+		cm->count += n;
+	}
+	rte_spinlock_unlock(&cm->lock);
+	return ret;
+}
+
+
+static int
+custom_mempool_dequeue(struct rte_mempool *mp, void **obj_table, unsigned n)
+{
+	struct custom_mempool *cm = (struct custom_mempool *)(mp->pool_data);
+	int ret = 0;
+
+	rte_spinlock_lock(&cm->lock);
+	if (n > cm->count) {
+		ret = -ENOENT;
+	} else {
+		cm->count -= n;
+		memcpy(obj_table, &cm->elts[cm->count], sizeof(void *) * n);
+	}
+	rte_spinlock_unlock(&cm->lock);
+	return ret;
+}
+
+static unsigned
+custom_mempool_get_count(const struct rte_mempool *mp)
+{
+	struct custom_mempool *cm = (struct custom_mempool *)(mp->pool_data);
+
+	return cm->count;
+}
+
+static struct rte_mempool_ops mempool_ops_custom = {
+	.name = "custom_handler",
+	.alloc = custom_mempool_alloc,
+	.free = custom_mempool_free,
+	.enqueue = custom_mempool_enqueue,
+	.dequeue = custom_mempool_dequeue,
+	.get_count = custom_mempool_get_count,
+};
+
+MEMPOOL_REGISTER_OPS(mempool_ops_custom);
+
+/*
  * save the object number in the first 4 bytes of object data. All
  * other bytes are set to 0.
  */
@@ -292,12 +385,14 @@ static int test_mempool_single_consumer(void)
 * test function for mempool test based on single consumer and single producer,
  * can run on one lcore only
  */
-static int test_mempool_launch_single_consumer(__attribute__((unused)) void *arg)
+static int
+test_mempool_launch_single_consumer(__attribute__((unused)) void *arg)
 {
 	return test_mempool_single_consumer();
 }
 
-static void my_mp_init(struct rte_mempool * mp, __attribute__((unused)) void * arg)
+static void
+my_mp_init(struct rte_mempool *mp, __attribute__((unused)) void *arg)
 {
 	printf("mempool name is %s\n", mp->name);
 	/* nothing to be implemented here*/
@@ -477,6 +572,7 @@ test_mempool(void)
 {
 	struct rte_mempool *mp_cache = NULL;
 	struct rte_mempool *mp_nocache = NULL;
+	struct rte_mempool *mp_ext = NULL;
 
 	rte_atomic32_init(&synchro);
 
@@ -505,6 +601,27 @@ test_mempool(void)
 		goto err;
 	}
 
+	/* create a mempool with an external handler */
+	mp_ext = rte_mempool_create_empty("test_ext",
+		MEMPOOL_SIZE,
+		MEMPOOL_ELT_SIZE,
+		RTE_MEMPOOL_CACHE_MAX_SIZE, 0,
+		SOCKET_ID_ANY, 0);
+
+	if (mp_ext == NULL) {
+		printf("cannot allocate mp_ext mempool\n");
+		goto err;
+	}
+	if (rte_mempool_set_ops_byname(mp_ext, "custom_handler", NULL) < 0) {
+		printf("cannot set custom handler\n");
+		goto err;
+	}
+	if (rte_mempool_populate_default(mp_ext) < 0) {
+		printf("cannot populate mp_ext mempool\n");
+		goto err;
+	}
+	rte_mempool_obj_iter(mp_ext, my_obj_init, NULL);
+
 	/* retrieve the mempool from its name */
 	if (rte_mempool_lookup("test_nocache") != mp_nocache) {
 		printf("Cannot lookup mempool from its name\n");
@@ -545,6 +662,7 @@ test_mempool(void)
 err:
 	rte_mempool_free(mp_nocache);
 	rte_mempool_free(mp_cache);
+	rte_mempool_free(mp_ext);
 	return -1;
 }
 
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 238+ messages in thread

* [PATCH v15 3/3] mbuf: make default mempool ops configurable at build
  2016-06-19 12:05                           ` [PATCH v15 0/3] mempool: add mempool handler feature David Hunt
  2016-06-19 12:05                             ` [PATCH v15 1/3] mempool: support mempool handler operations David Hunt
  2016-06-19 12:05                             ` [PATCH v15 2/3] app/test: test mempool handler David Hunt
@ 2016-06-19 12:05                             ` David Hunt
  2016-06-22  7:56                             ` [PATCH v15 0/3] mempool: add mempool handler feature Thomas Monjalon
  2016-06-22  9:27                             ` [PATCH v16 " David Hunt
  4 siblings, 0 replies; 238+ messages in thread
From: David Hunt @ 2016-06-19 12:05 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, viktorin, jerin.jacob, shreyansh.jain, David Hunt

By default, the mempool ops used for mbuf allocations is a multi
producer and multi consumer ring. We could imagine a target (maybe some
network processors?) that provides an hardware-assisted pool
mechanism. In this case, the default configuration for this architecture
would contain a different value for RTE_MBUF_DEFAULT_MEMPOOL_OPS.
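
A port to such a target could then override the new build option in its configuration; the handler name below is purely illustrative, not one defined by this series:

```
# Hypothetical target config fragment (illustrative handler name):
CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS="my_hw_pool_ops"
```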

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
Reviewed-by: Jan Viktorin <viktorin@rehivetech.com>
Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
 config/common_base         |  1 +
 lib/librte_mbuf/rte_mbuf.c | 26 ++++++++++++++++++++++----
 2 files changed, 23 insertions(+), 4 deletions(-)

diff --git a/config/common_base b/config/common_base
index 11ac81e..5f230db 100644
--- a/config/common_base
+++ b/config/common_base
@@ -394,6 +394,7 @@ CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG=n
 #
 CONFIG_RTE_LIBRTE_MBUF=y
 CONFIG_RTE_LIBRTE_MBUF_DEBUG=n
+CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS="ring_mp_mc"
 CONFIG_RTE_MBUF_REFCNT_ATOMIC=y
 CONFIG_RTE_PKTMBUF_HEADROOM=128
 
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index 2ece742..8cf5436 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -153,6 +153,7 @@ rte_pktmbuf_pool_create(const char *name, unsigned n,
 	unsigned cache_size, uint16_t priv_size, uint16_t data_room_size,
 	int socket_id)
 {
+	struct rte_mempool *mp;
 	struct rte_pktmbuf_pool_private mbp_priv;
 	unsigned elt_size;
 
@@ -167,10 +168,27 @@ rte_pktmbuf_pool_create(const char *name, unsigned n,
 	mbp_priv.mbuf_data_room_size = data_room_size;
 	mbp_priv.mbuf_priv_size = priv_size;
 
-	return rte_mempool_create(name, n, elt_size,
-		cache_size, sizeof(struct rte_pktmbuf_pool_private),
-		rte_pktmbuf_pool_init, &mbp_priv, rte_pktmbuf_init, NULL,
-		socket_id, 0);
+	mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
+		 sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
+	if (mp == NULL)
+		return NULL;
+
+	rte_errno = rte_mempool_set_ops_byname(mp,
+			RTE_MBUF_DEFAULT_MEMPOOL_OPS, NULL);
+	if (rte_errno != 0) {
+		RTE_LOG(ERR, MBUF, "error setting mempool handler\n");
+		rte_mempool_free(mp);
+		return NULL;
+	}
+	rte_pktmbuf_pool_init(mp, &mbp_priv);
+
+	if (rte_mempool_populate_default(mp) < 0) {
+		rte_mempool_free(mp);
+		return NULL;
+	}
+
+	rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);
+
+	return mp;
 }
 
 /* do some sanity checks on a mbuf: panic if it fails */
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 238+ messages in thread

* Re: [PATCH v15 0/3] mempool: add mempool handler feature
  2016-06-19 12:05                           ` [PATCH v15 0/3] mempool: add mempool handler feature David Hunt
                                               ` (2 preceding siblings ...)
  2016-06-19 12:05                             ` [PATCH v15 3/3] mbuf: make default mempool ops configurable at build David Hunt
@ 2016-06-22  7:56                             ` Thomas Monjalon
  2016-06-22  8:02                               ` Thomas Monjalon
  2016-06-22  9:27                             ` [PATCH v16 " David Hunt
  4 siblings, 1 reply; 238+ messages in thread
From: Thomas Monjalon @ 2016-06-22  7:56 UTC (permalink / raw)
  To: David Hunt; +Cc: dev, olivier.matz, viktorin, jerin.jacob, shreyansh.jain

2016-06-19 13:05, David Hunt:
> v15 changes:
> 
>  * Changed rte_mempool_ops_get() to rte_mempool_get_ops()

I don't find this change in the patch.
But I wonder whether it is really needed.

^ permalink raw reply	[flat|nested] 238+ messages in thread

* Re: [PATCH v15 0/3] mempool: add mempool handler feature
  2016-06-22  7:56                             ` [PATCH v15 0/3] mempool: add mempool handler feature Thomas Monjalon
@ 2016-06-22  8:02                               ` Thomas Monjalon
  0 siblings, 0 replies; 238+ messages in thread
From: Thomas Monjalon @ 2016-06-22  8:02 UTC (permalink / raw)
  To: David Hunt; +Cc: dev, olivier.matz, viktorin, jerin.jacob, shreyansh.jain

2016-06-22 09:56, Thomas Monjalon:
> 2016-06-19 13:05, David Hunt:
> > v15 changes:
> > 
> >  * Changed rte_mempool_ops_get() to rte_mempool_get_ops()
> 
> I don't find this change in the patch.
> But I wonder whether it is really needed.

If we assume that rte_mempool_ops_* are wrappers on top of handlers,
rte_mempool_ops_get and rte_mempool_ops_register should be renamed to
rte_mempool_get_ops and rte_mempool_register_ops.

^ permalink raw reply	[flat|nested] 238+ messages in thread

* [PATCH v16 0/3] mempool: add mempool handler feature
  2016-06-19 12:05                           ` [PATCH v15 0/3] mempool: add mempool handler feature David Hunt
                                               ` (3 preceding siblings ...)
  2016-06-22  7:56                             ` [PATCH v15 0/3] mempool: add mempool handler feature Thomas Monjalon
@ 2016-06-22  9:27                             ` David Hunt
  2016-06-22  9:27                               ` [PATCH v16 1/3] mempool: support mempool handler operations David Hunt
                                                 ` (3 more replies)
  4 siblings, 4 replies; 238+ messages in thread
From: David Hunt @ 2016-06-22  9:27 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, viktorin, jerin.jacob, shreyansh.jain

Here's the latest version of the Mempool Handler patch set.
It's rebased on top of the latest head as of 20/6/2016, including
Olivier's 35-part patch series on mempool re-org [1].

[1] http://dpdk.org/ml/archives/dev/2016-May/039229.html

v16 changes:

 * Changed rte_mempool_ops_get() to rte_mempool_get_ops()
 * Changed rte_mempool_ops_register() to rte_mempool_register_ops()
 * Applied missing changes that should have been in v15

v15 changes:

 * Changed rte_mempool_ops_get() to rte_mempool_get_ops()
 * Did some minor tweaks to comments after the previous change of function
   names from put/get to enqueue/dequeue
 * Added missing spinlock_unlock in rte_mempool_ops_register()
 * Added check for null in ops_free
 * removed un-needed return statement

v14 changes:

 * set MEMPOOL_F_RING_CREATED flag after rte_mempool_ring_create() is called.
 * Changed name of feature from "external mempool manager" to "mempool handler"
   and updated comments and release notes accordingly.
 * Added a comment for newly added pool_config param in
   rte_mempool_set_ops_byname.

v13 changes:

 * Added in extra opaque data (pool_config) to mempool struct for mempool
   configuration by the ops functions. For example, this can be used to pass
   device names or device flags to the underlying alloc function.
 * Added mempool_config param to rte_mempool_set_ops_byname()

v12 changes:

 * Fixed a comment (function param h -> ops)
 * fixed a typo (callbacki)

v11 changes:

 * Fixed comments (added '.' where needed for consistency)
 * removed ABI breakage notice for mempool manager in deprecation.rst
 * Added description of the external mempool manager functionality to
   doc/guides/prog_guide/mempool_lib.rst (John Mc reviewed)
 * renamed rte_mempool_default.c to rte_mempool_ring.c

v10 changes:

 * changed the _put/_get op names to _enqueue/_dequeue to be consistent
   with the function names
 * some rte_errno cleanup
 * comment tweaks about when to set pool_data
 * removed an un-needed check for ops->alloc == NULL

v9 changes:

 * added a check for NULL alloc in rte_mempool_ops_register
 * rte_mempool_alloc_t now returns int instead of void*
 * fixed some comment typos
 * removed some unneeded typecasts
 * changed a return NULL to return -EEXIST in rte_mempool_ops_register
 * fixed rte_mempool_version.map file so builds ok as shared libs
 * moved flags check from rte_mempool_create_empty to rte_mempool_create

v8 changes:

 * merged first three patches in the series into one.
 * changed parameters to ops callback to all be rte_mempool pointer
   rather than a pointer to opaque data or uint64.
 * comment fixes.
 * fixed parameter to _free function (was inconsistent).
 * changed MEMPOOL_F_RING_CREATED to MEMPOOL_F_POOL_CREATED

v7 changes:

 * Changed rte_mempool_handler_table to rte_mempool_ops_table
 * Changed hander_idx to ops_index in rte_mempool struct
 * Reworked comments in rte_mempool.h around ops functions
 * Changed rte_mempool_handler.c to rte_mempool_ops.c
 * Changed all functions containing _handler_ to _ops_
 * Now there is no mention of 'handler' left
 * Other small changes out of review of mailing list

v6 changes:

 * Moved the flags handling from rte_mempool_create_empty to
   rte_mempool_create, as it's only there for backward compatibility
 * Various comment additions and cleanup
 * Renamed rte_mempool_handler to rte_mempool_ops
 * Added a union for *pool and u64 pool_id in struct rte_mempool
 * split the original patch into a few parts for easier review.
 * rename functions with _ext_ to _ops_.
 * addressed review comments
 * renamed put and get functions to enqueue and dequeue
 * changed occurrences of rte_mempool_ops to const, as they
   contain function pointers (security)
 * split out the default external mempool handler into a separate
   patch for easier review

v5 changes:
 * rebasing, as it is dependent on another patch series [1]

v4 changes (Olivier Matz):
 * remove the rte_mempool_create_ext() function. To change the handler, the
   user has to do the following:
   - mp = rte_mempool_create_empty()
   - rte_mempool_set_handler(mp, "my_handler")
   - rte_mempool_populate_default(mp)
   This avoids adding another function with more than 10 arguments and
   duplicating the doxygen comments
 * change the api of rte_mempool_alloc_t: only the mempool pointer is required
   as all information is available in it
 * change the api of rte_mempool_free_t: remove return value
 * move inline wrapper functions from the .c to the .h (else they won't be
   inlined). This implies having a single header file (rte_mempool.h);
   otherwise it would generate cross-dependency issues.
 * remove now unused MEMPOOL_F_INT_HANDLER (note: it was misused anyway due
   to the use of && instead of &)
 * fix build in debug mode (__MEMPOOL_STAT_ADD(mp, put_pool, n) remaining)
 * fix build with shared libraries (global handler has to be declared in
   the .map file)
 * rationalize #include order
 * remove unused function rte_mempool_get_handler_name()
 * rename some structures, fields, functions
 * remove the static in front of rte_tailq_elem rte_mempool_tailq (comment
   from Yuanhan)
 * test the ext mempool handler in the same file as the standard mempool
   tests, avoiding code duplication
 * rework the custom handler in mempool_test
 * rework a bit the patch selecting default mbuf pool handler
 * fix some doxygen comments

v3 changes:
 * simplified the file layout, renamed to rte_mempool_handler.[hc]
 * moved the default handlers into rte_mempool_default.c
 * moved the example handler out into app/test/test_ext_mempool.c
 * removed is_mc/is_mp change, slight perf degradation on sp cached operation
 * removed stack handler, may re-introduce at a later date
 * Changes out of code reviews

v2 changes:
 * There was a lot of duplicate code between rte_mempool_xmem_create and
   rte_mempool_create_ext. This has now been refactored and is now
   hopefully cleaner.
 * The RTE_NEXT_ABI define is now used to allow building of the library
   in a format that is compatible with binaries built against previous
   versions of DPDK.
 * Changes out of code reviews. Hopefully I've got most of them included.

The Mempool Handler feature is an extension to the mempool API that allows
users to add and use an alternative mempool handler, enabling external
memory subsystems, such as hardware memory management systems and software
based memory allocators, to be used with DPDK.

The existing API to the internal DPDK mempool handler will remain unchanged
and will be backward compatible. However, there will be an ABI breakage, as
the mempool struct is changing.

There are two aspects to mempool handlers.
  1. Adding the code for your new mempool operations (ops). This is
     achieved by adding a new mempool ops source file into the
     librte_mempool library, and using the MEMPOOL_REGISTER_OPS macro.
  2. Using the new API to call rte_mempool_create_empty and
     rte_mempool_set_ops_byname to create a new mempool
     using the name parameter to identify which ops to use.

New API calls added
 1. A new rte_mempool_create_empty() function
 2. rte_mempool_set_ops_byname() which sets the mempool's ops (functions)
 3. rte_mempool_populate_default() and rte_mempool_populate_anon() functions,
    which populate the mempool using the relevant ops

Several mempool handlers may be used in the same application. A new
mempool can then be created by using the new rte_mempool_create_empty function,
then calling rte_mempool_set_ops_byname to point the mempool to the relevant
mempool handler callback (ops) structure.

Legacy applications will continue to work with the old rte_mempool_create API
call, which uses a ring based mempool handler by default. To use a different
handler, these applications will need to be modified.
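As a plain-C sketch of that flag-to-handler mapping (the F_* macros and
ops_name_from_flags() below are illustrative stand-ins, not the actual DPDK
symbols; the real selection happens inside rte_mempool_create() via
rte_mempool_set_ops_byname()):

```c
#include <assert.h>
#include <string.h>

/* Illustrative stand-ins for MEMPOOL_F_SP_PUT / MEMPOOL_F_SC_GET. */
#define F_SP_PUT 0x0004
#define F_SC_GET 0x0008

/* Map legacy rte_mempool_create() flags to the name of one of the four
 * built-in ring-based handlers. */
static const char *
ops_name_from_flags(int flags)
{
	if ((flags & F_SP_PUT) && (flags & F_SC_GET))
		return "ring_sp_sc";
	else if (flags & F_SP_PUT)
		return "ring_sp_mc";
	else if (flags & F_SC_GET)
		return "ring_mp_sc";
	return "ring_mp_mc";
}
```

Passing no flags selects the multi-producer/multi-consumer default, matching
the old behaviour.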

A mempool handler needs to provide the following functions.
 1. alloc     - allocates the mempool memory, and adds each object onto a ring
 2. enqueue   - puts an object back into the mempool once an application has
                finished with it
 3. dequeue   - gets an object from the mempool for use by the application
 4. get_count - gets the number of available objects in the mempool
 5. free      - frees the mempool memory

Every time an enqueue/dequeue/get_count is called from the application/PMD,
the callback for that mempool is called. These functions are in the fastpath,
and any unoptimised ops may limit performance.
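To illustrate the shape of those five callbacks outside of DPDK, here is a
minimal plain-C sketch of an array-backed (LIFO) handler. struct mock_mempool
and the stack_* names are invented for illustration and do not match the real
rte_mempool types or signatures:

```c
#include <assert.h>
#include <stdlib.h>

struct mock_mempool {
	void *pool_data;	/* set by alloc(), like mp->pool_data */
	unsigned size;
};

struct stack_pool {		/* simple LIFO backing store */
	void **objs;
	unsigned len;
	unsigned cap;
};

/* alloc: create the backing store and set pool_data. */
static int stack_alloc(struct mock_mempool *mp)
{
	struct stack_pool *s = malloc(sizeof(*s));
	if (s == NULL)
		return -1;
	s->objs = malloc(mp->size * sizeof(void *));
	if (s->objs == NULL) {
		free(s);
		return -1;
	}
	s->len = 0;
	s->cap = mp->size;
	mp->pool_data = s;
	return 0;
}

/* enqueue: put objects back into the pool. */
static int stack_enqueue(struct mock_mempool *mp, void * const *tbl, unsigned n)
{
	struct stack_pool *s = mp->pool_data;
	if (s->len + n > s->cap)
		return -1;
	for (unsigned i = 0; i < n; i++)
		s->objs[s->len++] = tbl[i];
	return 0;
}

/* dequeue: get objects from the pool for the application. */
static int stack_dequeue(struct mock_mempool *mp, void **tbl, unsigned n)
{
	struct stack_pool *s = mp->pool_data;
	if (s->len < n)
		return -1;
	for (unsigned i = 0; i < n; i++)
		tbl[i] = s->objs[--s->len];
	return 0;
}

/* get_count: number of objects currently available. */
static unsigned stack_get_count(const struct mock_mempool *mp)
{
	return ((const struct stack_pool *)mp->pool_data)->len;
}

/* free: release the backing store. */
static void stack_free(struct mock_mempool *mp)
{
	struct stack_pool *s = mp->pool_data;
	free(s->objs);
	free(s);
}
```

An object enqueued through stack_enqueue() comes back out of stack_dequeue(),
which is the round-trip the fastpath wrappers rely on.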

The new APIs are as follows:

1. rte_mempool_create_empty

struct rte_mempool *
rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
    unsigned cache_size, unsigned private_data_size,
    int socket_id, unsigned flags);

2. rte_mempool_set_ops_byname()

int
rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
    void *pool_config);

3. rte_mempool_populate_default()

int rte_mempool_populate_default(struct rte_mempool *mp);

4. rte_mempool_populate_anon()

int rte_mempool_populate_anon(struct rte_mempool *mp);

Please see rte_mempool.h for further information on the parameters.


The important thing to note is that the mempool ops struct is passed by name
to rte_mempool_set_ops_byname, which looks through the ops struct array to
get the ops_index, which is then stored in the rte_mempool structure. This
allows multiple processes to use the same mempool, as the function pointers
are accessed via the ops index.
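A minimal plain-C sketch of that by-name lookup and index-based dispatch
(all names here are illustrative; the real per-process table is
rte_mempool_ops_table):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define MAX_OPS 16

struct pool;				/* illustrative, not rte_mempool */

struct demo_ops {
	const char *name;
	int (*dequeue)(struct pool *p);
};

/* Each process has its own copy of this table; the *indices* are the
 * same everywhere, which is why only an index goes into the pool. */
static struct demo_ops ops_table[MAX_OPS];
static unsigned num_ops;

static int register_demo_ops(const char *name, int (*dequeue)(struct pool *p))
{
	if (num_ops == MAX_OPS)
		return -1;		/* like -ENOSPC */
	ops_table[num_ops].name = name;
	ops_table[num_ops].dequeue = dequeue;
	return (int)num_ops++;		/* return the index, not a pointer */
}

struct pool {
	int32_t ops_index;		/* safe to share across processes */
};

/* Resolve the name once at setup time; store only the index. */
static int set_ops_byname(struct pool *p, const char *name)
{
	for (unsigned i = 0; i < num_ops; i++) {
		if (strcmp(ops_table[i].name, name) == 0) {
			p->ops_index = (int32_t)i;
			return 0;
		}
	}
	return -1;			/* like -EINVAL */
}

/* Fastpath dispatch: each process indexes its own ops table. */
static int pool_dequeue(struct pool *p)
{
	return ops_table[p->ops_index].dequeue(p);
}

/* A trivial handler standing in for the ring-based default. */
static int ring_dequeue(struct pool *p) { (void)p; return 7; }
```

A secondary process that registers the same handlers in the same order gets
identical indices, so a pool created by the primary dispatches correctly.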

The mempool ops structure contains callbacks implementing the ops functions,
and is set up for registration as follows:

static const struct rte_mempool_ops ops_sp_mc = {
    .name = "ring_sp_mc",
    .alloc = rte_mempool_common_ring_alloc,
    .enqueue = common_ring_sp_enqueue,
    .dequeue = common_ring_mc_dequeue,
    .get_count = common_ring_get_count,
    .free = common_ring_free,
};

The following macro will then register the ops struct in the array of ops
structures:

MEMPOOL_REGISTER_OPS(ops_sp_mc);
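The macro relies on a constructor function that runs before main(), so every
handler source file that gets linked in registers itself automatically. A
standalone sketch of the same trick (MY_REGISTER_OPS, my_register_ops and
struct my_ops are invented names, not the DPDK ones):

```c
#include <assert.h>
#include <string.h>

struct my_ops {
	const char *name;
};

static const char *registered_names[8];
static unsigned nb_registered;

static int my_register_ops(const struct my_ops *ops)
{
	if (nb_registered == 8)
		return -1;
	registered_names[nb_registered] = ops->name;
	return (int)nb_registered++;
}

/* A constructor function runs before main(), so each handler
 * self-registers as soon as its object file is linked in. */
#define MY_REGISTER_OPS(ops)						\
	static void __attribute__((constructor, used))			\
	ops_init_##ops(void)						\
	{								\
		my_register_ops(&ops);					\
	}

static const struct my_ops ops_sp_mc = { .name = "ring_sp_mc" };
MY_REGISTER_OPS(ops_sp_mc);
```

By the time main() runs, "ring_sp_mc" is already in the table, without any
explicit init call from the application.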

For an example of API usage, please see app/test/test_mempool.c, which
implements a rudimentary "custom_handler" mempool handler using simple mallocs
for each mempool object. This file also contains the callbacks and self
registration for the new handler.

David Hunt (2):
  mempool: support mempool handler operations
  mbuf: make default mempool ops configurable at build

Olivier Matz (1):
  app/test: test mempool handler

^ permalink raw reply	[flat|nested] 238+ messages in thread

* [PATCH v16 1/3] mempool: support mempool handler operations
  2016-06-22  9:27                             ` [PATCH v16 " David Hunt
@ 2016-06-22  9:27                               ` David Hunt
  2016-06-22  9:27                               ` [PATCH v16 2/3] app/test: test mempool handler David Hunt
                                                 ` (2 subsequent siblings)
  3 siblings, 0 replies; 238+ messages in thread
From: David Hunt @ 2016-06-22  9:27 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, viktorin, jerin.jacob, shreyansh.jain, David Hunt

Until now, the objects stored in a mempool were internally stored in a
ring. This patch introduces the possibility to register external handlers
replacing the ring.

The default behavior remains unchanged, but calling the new function
rte_mempool_set_ops_byname() right after rte_mempool_create_empty() allows
the user to change the handler that will be used when populating
the mempool.

This patch also adds a set of default ops (function callbacks) based
on rte_ring.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
 app/test/test_mempool_perf.c               |   1 -
 doc/guides/prog_guide/mempool_lib.rst      |  32 +++-
 doc/guides/rel_notes/deprecation.rst       |   9 -
 lib/librte_mempool/Makefile                |   2 +
 lib/librte_mempool/rte_mempool.c           |  67 +++-----
 lib/librte_mempool/rte_mempool.h           | 255 ++++++++++++++++++++++++++---
 lib/librte_mempool/rte_mempool_ops.c       | 151 +++++++++++++++++
 lib/librte_mempool/rte_mempool_ring.c      | 161 ++++++++++++++++++
 lib/librte_mempool/rte_mempool_version.map |  13 +-
 9 files changed, 610 insertions(+), 81 deletions(-)
 create mode 100644 lib/librte_mempool/rte_mempool_ops.c
 create mode 100644 lib/librte_mempool/rte_mempool_ring.c

diff --git a/app/test/test_mempool_perf.c b/app/test/test_mempool_perf.c
index c5e3576..c5f8455 100644
--- a/app/test/test_mempool_perf.c
+++ b/app/test/test_mempool_perf.c
@@ -161,7 +161,6 @@ per_lcore_mempool_test(__attribute__((unused)) void *arg)
 							   n_get_bulk);
 				if (unlikely(ret < 0)) {
 					rte_mempool_dump(stdout, mp);
-					rte_ring_dump(stdout, mp->ring);
 					/* in this case, objects are lost... */
 					return -1;
 				}
diff --git a/doc/guides/prog_guide/mempool_lib.rst b/doc/guides/prog_guide/mempool_lib.rst
index c3afc2e..1943fc4 100644
--- a/doc/guides/prog_guide/mempool_lib.rst
+++ b/doc/guides/prog_guide/mempool_lib.rst
@@ -34,7 +34,8 @@ Mempool Library
 ===============
 
 A memory pool is an allocator of a fixed-sized object.
-In the DPDK, it is identified by name and uses a ring to store free objects.
+In the DPDK, it is identified by name and uses a mempool handler to store free objects.
+The default mempool handler is ring based.
 It provides some other optional services such as a per-core object cache and
 an alignment helper to ensure that objects are padded to spread them equally on all DRAM or DDR3 channels.
 
@@ -127,6 +128,35 @@ The maximum size of the cache is static and is defined at compilation time (CONF
    A mempool in Memory with its Associated Ring
 
 
+Mempool Handlers
+------------------------
+
+This allows external memory subsystems, such as external hardware memory
+management systems and software based memory allocators, to be used with DPDK.
+
+There are two aspects to a mempool handler.
+
+* Adding the code for your new mempool operations (ops). This is achieved by
+  adding the new mempool ops code, and using the ``MEMPOOL_REGISTER_OPS`` macro.
+
+* Using the new API to call ``rte_mempool_create_empty()`` and
+  ``rte_mempool_set_ops_byname()`` to create a new mempool and specifying which
+  ops to use.
+
+Several different mempool handlers may be used in the same application. A new
+mempool can be created by using the ``rte_mempool_create_empty()`` function,
+then using ``rte_mempool_set_ops_byname()`` to point the mempool to the
+relevant mempool handler callback (ops) structure.
+
+Legacy applications may continue to use the old ``rte_mempool_create()`` API
+call, which uses a ring based mempool handler by default. To use a different
+handler, these applications will need to be modified.
+
+For applications that use ``rte_pktmbuf_pool_create()``, there is a config setting
+(``RTE_MBUF_DEFAULT_MEMPOOL_OPS``) that allows the application to make use of
+an alternative mempool handler.
+
+
 Use Cases
 ---------
 
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index f75183f..3cbc19e 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -34,15 +34,6 @@ Deprecation Notices
   compact API. The ones that remain are backwards compatible and use the
   per-lcore default cache if available. This change targets release 16.07.
 
-* The rte_mempool struct will be changed in 16.07 to facilitate the new
-  external mempool manager functionality.
-  The ring element will be replaced with a more generic 'pool' opaque pointer
-  to allow new mempool handlers to use their own user-defined mempool
-  layout. Also newly added to rte_mempool is a handler index.
-  The existing API will be backward compatible, but there will be new API
-  functions added to facilitate the creation of mempools using an external
-  handler. The 16.07 release will contain these changes.
-
 * A librte_vhost public structures refactor is planned for DPDK 16.07
   that requires both ABI and API change.
   The proposed refactor would expose DPDK vhost dev to applications as
diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
index 43423e0..a4c089e 100644
--- a/lib/librte_mempool/Makefile
+++ b/lib/librte_mempool/Makefile
@@ -42,6 +42,8 @@ LIBABIVER := 2
 
 # all source are stored in SRCS-y
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool.c
+SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_ops.c
+SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_ring.c
 # install includes
 SYMLINK-$(CONFIG_RTE_LIBRTE_MEMPOOL)-include := rte_mempool.h
 
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index af71edd..e6a83d0 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -148,7 +148,7 @@ mempool_add_elem(struct rte_mempool *mp, void *obj, phys_addr_t physaddr)
 #endif
 
 	/* enqueue in ring */
-	rte_ring_sp_enqueue(mp->ring, obj);
+	rte_mempool_ops_enqueue_bulk(mp, &obj, 1);
 }
 
 /* call obj_cb() for each mempool element */
@@ -303,40 +303,6 @@ rte_mempool_xmem_usage(__rte_unused void *vaddr, uint32_t elt_num,
 	return (size_t)paddr_idx << pg_shift;
 }
 
-/* create the internal ring */
-static int
-rte_mempool_ring_create(struct rte_mempool *mp)
-{
-	int rg_flags = 0, ret;
-	char rg_name[RTE_RING_NAMESIZE];
-	struct rte_ring *r;
-
-	ret = snprintf(rg_name, sizeof(rg_name),
-		RTE_MEMPOOL_MZ_FORMAT, mp->name);
-	if (ret < 0 || ret >= (int)sizeof(rg_name))
-		return -ENAMETOOLONG;
-
-	/* ring flags */
-	if (mp->flags & MEMPOOL_F_SP_PUT)
-		rg_flags |= RING_F_SP_ENQ;
-	if (mp->flags & MEMPOOL_F_SC_GET)
-		rg_flags |= RING_F_SC_DEQ;
-
-	/* Allocate the ring that will be used to store objects.
-	 * Ring functions will return appropriate errors if we are
-	 * running as a secondary process etc., so no checks made
-	 * in this function for that condition.
-	 */
-	r = rte_ring_create(rg_name, rte_align32pow2(mp->size + 1),
-		mp->socket_id, rg_flags);
-	if (r == NULL)
-		return -rte_errno;
-
-	mp->ring = r;
-	mp->flags |= MEMPOOL_F_RING_CREATED;
-	return 0;
-}
-
 /* free a memchunk allocated with rte_memzone_reserve() */
 static void
 rte_mempool_memchunk_mz_free(__rte_unused struct rte_mempool_memhdr *memhdr,
@@ -354,7 +320,7 @@ rte_mempool_free_memchunks(struct rte_mempool *mp)
 	void *elt;
 
 	while (!STAILQ_EMPTY(&mp->elt_list)) {
-		rte_ring_sc_dequeue(mp->ring, &elt);
+		rte_mempool_ops_dequeue_bulk(mp, &elt, 1);
 		(void)elt;
 		STAILQ_REMOVE_HEAD(&mp->elt_list, next);
 		mp->populated_size--;
@@ -386,10 +352,11 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
 	int ret;
 
 	/* create the internal ring if not already done */
-	if ((mp->flags & MEMPOOL_F_RING_CREATED) == 0) {
-		ret = rte_mempool_ring_create(mp);
-		if (ret < 0)
+	if ((mp->flags & MEMPOOL_F_POOL_CREATED) == 0) {
+		ret = rte_mempool_ops_alloc(mp);
+		if (ret != 0)
 			return ret;
+		mp->flags |= MEMPOOL_F_POOL_CREATED;
 	}
 
 	/* mempool is already populated */
@@ -703,7 +670,7 @@ rte_mempool_free(struct rte_mempool *mp)
 	rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
 
 	rte_mempool_free_memchunks(mp);
-	rte_ring_free(mp->ring);
+	rte_mempool_ops_free(mp);
 	rte_memzone_free(mp->mz);
 }
 
@@ -815,6 +782,7 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
 		RTE_PTR_ADD(mp, MEMPOOL_HEADER_SIZE(mp, 0));
 
 	te->data = mp;
+
 	rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
 	TAILQ_INSERT_TAIL(mempool_list, te, next);
 	rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
@@ -844,6 +812,19 @@ rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
 	if (mp == NULL)
 		return NULL;
 
+	/*
+	 * Since we have 4 combinations of the SP/SC/MP/MC, examine the flags
+	 * to set the correct index into the table of ops structs.
+	 */
+	if ((flags & MEMPOOL_F_SP_PUT) && (flags & MEMPOOL_F_SC_GET))
+		rte_mempool_set_ops_byname(mp, "ring_sp_sc", NULL);
+	else if (flags & MEMPOOL_F_SP_PUT)
+		rte_mempool_set_ops_byname(mp, "ring_sp_mc", NULL);
+	else if (flags & MEMPOOL_F_SC_GET)
+		rte_mempool_set_ops_byname(mp, "ring_mp_sc", NULL);
+	else
+		rte_mempool_set_ops_byname(mp, "ring_mp_mc", NULL);
+
 	/* call the mempool priv initializer */
 	if (mp_init)
 		mp_init(mp, mp_init_arg);
@@ -930,7 +911,7 @@ rte_mempool_count(const struct rte_mempool *mp)
 	unsigned count;
 	unsigned lcore_id;
 
-	count = rte_ring_count(mp->ring);
+	count = rte_mempool_ops_get_count(mp);
 
 	if (mp->cache_size == 0)
 		return count;
@@ -1119,7 +1100,7 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 
 	fprintf(f, "mempool <%s>@%p\n", mp->name, mp);
 	fprintf(f, "  flags=%x\n", mp->flags);
-	fprintf(f, "  ring=<%s>@%p\n", mp->ring->name, mp->ring);
+	fprintf(f, "  pool=%p\n", mp->pool_data);
 	fprintf(f, "  phys_addr=0x%" PRIx64 "\n", mp->mz->phys_addr);
 	fprintf(f, "  nb_mem_chunks=%u\n", mp->nb_mem_chunks);
 	fprintf(f, "  size=%"PRIu32"\n", mp->size);
@@ -1140,7 +1121,7 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 	}
 
 	cache_count = rte_mempool_dump_cache(f, mp);
-	common_count = rte_ring_count(mp->ring);
+	common_count = rte_mempool_ops_get_count(mp);
 	if ((cache_count + common_count) > mp->size)
 		common_count = mp->size - cache_count;
 	fprintf(f, "  common_pool_count=%u\n", common_count);
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 60339bd..0a1777c 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -67,6 +67,7 @@
 #include <inttypes.h>
 #include <sys/queue.h>
 
+#include <rte_spinlock.h>
 #include <rte_log.h>
 #include <rte_debug.h>
 #include <rte_lcore.h>
@@ -203,10 +204,14 @@ struct rte_mempool_memhdr {
  */
 struct rte_mempool {
 	char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool. */
-	struct rte_ring *ring;           /**< Ring to store objects. */
-	const struct rte_memzone *mz;    /**< Memzone where pool is allocated */
+	union {
+		void *pool_data;         /**< Ring or pool to store objects. */
+		uint64_t pool_id;        /**< External mempool identifier. */
+	};
+	void *pool_config;               /**< optional args for ops alloc. */
+	const struct rte_memzone *mz;    /**< Memzone where pool is alloc'd. */
 	int flags;                       /**< Flags of the mempool. */
-	int socket_id;                   /**< Socket id passed at mempool creation. */
+	int socket_id;                   /**< Socket id passed at create. */
 	uint32_t size;                   /**< Max size of the mempool. */
 	uint32_t cache_size;             /**< Size of per-lcore local cache. */
 	uint32_t cache_flushthresh;
@@ -217,6 +222,14 @@ struct rte_mempool {
 	uint32_t trailer_size;           /**< Size of trailer (after elt). */
 
 	unsigned private_data_size;      /**< Size of private data. */
+	/**
+	 * Index into rte_mempool_ops_table array of mempool ops
+	 * structs, which contain callback function pointers.
+	 * We're using an index here rather than pointers to the callbacks
+	 * to facilitate any secondary processes that may want to use
+	 * this mempool.
+	 */
+	int32_t ops_index;
 
 	struct rte_mempool_cache *local_cache; /**< Per-lcore local cache */
 
@@ -235,7 +248,7 @@ struct rte_mempool {
 #define MEMPOOL_F_NO_CACHE_ALIGN 0x0002 /**< Do not align objs on cache lines.*/
 #define MEMPOOL_F_SP_PUT         0x0004 /**< Default put is "single-producer".*/
 #define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
-#define MEMPOOL_F_RING_CREATED   0x0010 /**< Internal: ring is created */
+#define MEMPOOL_F_POOL_CREATED   0x0010 /**< Internal: pool is created. */
 #define MEMPOOL_F_NO_PHYS_CONTIG 0x0020 /**< Don't need physically contiguous objs. */
 
 /**
@@ -325,6 +338,215 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp,
 #define __mempool_check_cookies(mp, obj_table_const, n, free) do {} while(0)
 #endif /* RTE_LIBRTE_MEMPOOL_DEBUG */
 
+#define RTE_MEMPOOL_OPS_NAMESIZE 32 /**< Max length of ops struct name. */
+
+/**
+ * Prototype for implementation specific data provisioning function.
+ *
+ * The function should provide the implementation-specific memory
+ * for use by the other mempool ops functions in a given mempool ops struct.
+ * E.g. the default ops provides an rte_ring instance for this purpose.
+ * Other implementations will most likely use a different data structure,
+ * which is transparent to the application programmer.
+ * This function should set mp->pool_data.
+ */
+typedef int (*rte_mempool_alloc_t)(struct rte_mempool *mp);
+
+/**
+ * Free the opaque private data pointed to by mp->pool_data pointer.
+ */
+typedef void (*rte_mempool_free_t)(struct rte_mempool *mp);
+
+/**
+ * Enqueue an object into the external pool.
+ */
+typedef int (*rte_mempool_enqueue_t)(struct rte_mempool *mp,
+		void * const *obj_table, unsigned int n);
+
+/**
+ * Dequeue an object from the external pool.
+ */
+typedef int (*rte_mempool_dequeue_t)(struct rte_mempool *mp,
+		void **obj_table, unsigned int n);
+
+/**
+ * Return the number of available objects in the external pool.
+ */
+typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
+
+/** Structure defining mempool operations structure */
+struct rte_mempool_ops {
+	char name[RTE_MEMPOOL_OPS_NAMESIZE]; /**< Name of mempool ops struct. */
+	rte_mempool_alloc_t alloc;       /**< Allocate private data. */
+	rte_mempool_free_t free;         /**< Free the external pool. */
+	rte_mempool_enqueue_t enqueue;   /**< Enqueue an object. */
+	rte_mempool_dequeue_t dequeue;   /**< Dequeue an object. */
+	rte_mempool_get_count get_count; /**< Get qty of available objs. */
+} __rte_cache_aligned;
+
+#define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */
+
+/**
+ * Structure storing the table of registered ops structs, each of which contain
+ * the function pointers for the mempool ops functions.
+ * Each process has its own storage for this ops struct array so that
+ * the mempools can be shared across primary and secondary processes.
+ * The indices used to access the array are valid across processes, whereas
+ * any function pointers stored directly in the mempool struct would not be.
+ * This results in us simply having "ops_index" in the mempool struct.
+ */
+struct rte_mempool_ops_table {
+	rte_spinlock_t sl;     /**< Spinlock for add/delete. */
+	uint32_t num_ops;      /**< Number of used ops structs in the table. */
+	/**
+	 * Storage for all possible ops structs.
+	 */
+	struct rte_mempool_ops ops[RTE_MEMPOOL_MAX_OPS_IDX];
+} __rte_cache_aligned;
+
+/** Array of registered ops structs. */
+extern struct rte_mempool_ops_table rte_mempool_ops_table;
+
+/**
+ * @internal Get the mempool ops struct from its index.
+ *
+ * @param ops_index
+ *   The index of the ops struct in the ops struct table. It must be a valid
+ *   index: (0 <= idx < num_ops).
+ * @return
+ *   The pointer to the ops struct in the table.
+ */
+static inline struct rte_mempool_ops *
+rte_mempool_get_ops(int ops_index)
+{
+	RTE_VERIFY((ops_index >= 0) && (ops_index < RTE_MEMPOOL_MAX_OPS_IDX));
+
+	return &rte_mempool_ops_table.ops[ops_index];
+}
+
+/**
+ * @internal Wrapper for mempool_ops alloc callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @return
+ *   - 0: Success; successfully allocated mempool pool_data.
+ *   - <0: Error; code of alloc function.
+ */
+int
+rte_mempool_ops_alloc(struct rte_mempool *mp);
+
+/**
+ * @internal Wrapper for mempool_ops dequeue callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param obj_table
+ *   Pointer to a table of void * pointers (objects).
+ * @param n
+ *   Number of objects to get.
+ * @return
+ *   - 0: Success; got n objects.
+ *   - <0: Error; code of dequeue function.
+ */
+static inline int
+rte_mempool_ops_dequeue_bulk(struct rte_mempool *mp,
+		void **obj_table, unsigned n)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_get_ops(mp->ops_index);
+	return ops->dequeue(mp, obj_table, n);
+}
+
+/**
+ * @internal wrapper for mempool_ops enqueue callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param obj_table
+ *   Pointer to a table of void * pointers (objects).
+ * @param n
+ *   Number of objects to put.
+ * @return
+ *   - 0: Success; n objects supplied.
+ *   - <0: Error; code of enqueue function.
+ */
+static inline int
+rte_mempool_ops_enqueue_bulk(struct rte_mempool *mp, void * const *obj_table,
+		unsigned n)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_get_ops(mp->ops_index);
+	return ops->enqueue(mp, obj_table, n);
+}
+
+/**
+ * @internal wrapper for mempool_ops get_count callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @return
+ *   The number of available objects in the external pool.
+ */
+unsigned
+rte_mempool_ops_get_count(const struct rte_mempool *mp);
+
+/**
+ * @internal wrapper for mempool_ops free callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ */
+void
+rte_mempool_ops_free(struct rte_mempool *mp);
+
+/**
+ * Set the ops of a mempool.
+ *
+ * This can only be done on a mempool that is not populated, i.e. just after
+ * a call to rte_mempool_create_empty().
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param name
+ *   Name of the ops structure to use for this mempool.
+ * @param pool_config
+ *   Opaque data that can be passed by the application to the ops functions.
+ * @return
+ *   - 0: Success; the mempool is now using the requested ops functions.
+ *   - -EINVAL - Invalid ops struct name provided.
+ *   - -EEXIST - mempool already has an ops struct assigned.
+ */
+int
+rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
+		void *pool_config);
+
+/**
+ * Register mempool operations.
+ *
+ * @param ops
+ *   Pointer to an ops structure to register.
+ * @return
+ *   - >=0: Success; return the index of the ops struct in the table.
+ *   - -EINVAL - some missing callbacks while registering ops struct.
+ *   - -ENOSPC - the maximum number of ops structs has been reached.
+ */
+int rte_mempool_register_ops(const struct rte_mempool_ops *ops);
+
+/**
+ * Macro to statically register the ops of a mempool handler.
+ * Note that rte_mempool_register_ops fails silently here when more
+ * than RTE_MEMPOOL_MAX_OPS_IDX ops structs are registered.
+ */
+#define MEMPOOL_REGISTER_OPS(ops)					\
+	void mp_hdlr_init_##ops(void);					\
+	void __attribute__((constructor, used)) mp_hdlr_init_##ops(void)\
+	{								\
+		rte_mempool_register_ops(&ops);			\
+	}
+
 /**
  * An object callback function for mempool.
  *
@@ -774,7 +996,7 @@ __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
 	cache->len += n;
 
 	if (cache->len >= flushthresh) {
-		rte_ring_mp_enqueue_bulk(mp->ring, &cache->objs[cache_size],
+		rte_mempool_ops_enqueue_bulk(mp, &cache->objs[cache_size],
 				cache->len - cache_size);
 		cache->len = cache_size;
 	}
@@ -785,19 +1007,10 @@ ring_enqueue:
 
 	/* push remaining objects in ring */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-	if (is_mp) {
-		if (rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n) < 0)
-			rte_panic("cannot put objects in mempool\n");
-	}
-	else {
-		if (rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n) < 0)
-			rte_panic("cannot put objects in mempool\n");
-	}
+	if (rte_mempool_ops_enqueue_bulk(mp, obj_table, n) < 0)
+		rte_panic("cannot put objects in mempool\n");
 #else
-	if (is_mp)
-		rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n);
-	else
-		rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n);
+	rte_mempool_ops_enqueue_bulk(mp, obj_table, n);
 #endif
 }
 
@@ -945,7 +1158,8 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
 		uint32_t req = n + (cache_size - cache->len);
 
 		/* How many do we require, i.e. the number to fill the cache + the request */
-		ret = rte_ring_mc_dequeue_bulk(mp->ring, &cache->objs[cache->len], req);
+		ret = rte_mempool_ops_dequeue_bulk(mp,
+			&cache->objs[cache->len], req);
 		if (unlikely(ret < 0)) {
 			/*
 			 * In the offchance that we are buffer constrained,
@@ -972,10 +1186,7 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
 ring_dequeue:
 
 	/* get remaining objects from ring */
-	if (is_mc)
-		ret = rte_ring_mc_dequeue_bulk(mp->ring, obj_table, n);
-	else
-		ret = rte_ring_sc_dequeue_bulk(mp->ring, obj_table, n);
+	ret = rte_mempool_ops_dequeue_bulk(mp, obj_table, n);
 
 	if (ret < 0)
 		__MEMPOOL_STAT_ADD(mp, get_fail, n);
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
new file mode 100644
index 0000000..fd0b64c
--- /dev/null
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -0,0 +1,151 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2016 6WIND S.A.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <string.h>
+
+#include <rte_mempool.h>
+#include <rte_errno.h>
+
+/* indirect jump table to support external memory pools. */
+struct rte_mempool_ops_table rte_mempool_ops_table = {
+	.sl =  RTE_SPINLOCK_INITIALIZER,
+	.num_ops = 0
+};
+
+/* add a new ops struct in rte_mempool_ops_table, return its index. */
+int
+rte_mempool_register_ops(const struct rte_mempool_ops *h)
+{
+	struct rte_mempool_ops *ops;
+	int16_t ops_index;
+
+	rte_spinlock_lock(&rte_mempool_ops_table.sl);
+
+	if (rte_mempool_ops_table.num_ops >=
+			RTE_MEMPOOL_MAX_OPS_IDX) {
+		rte_spinlock_unlock(&rte_mempool_ops_table.sl);
+		RTE_LOG(ERR, MEMPOOL,
+			"Maximum number of mempool ops structs exceeded\n");
+		return -ENOSPC;
+	}
+
+	if (h->alloc == NULL || h->enqueue == NULL ||
+			h->dequeue == NULL || h->get_count == NULL) {
+		rte_spinlock_unlock(&rte_mempool_ops_table.sl);
+		RTE_LOG(ERR, MEMPOOL,
+			"Missing callback while registering mempool ops\n");
+		return -EINVAL;
+	}
+
+	if (strlen(h->name) >= sizeof(ops->name) - 1) {
+		rte_spinlock_unlock(&rte_mempool_ops_table.sl);
+		RTE_LOG(DEBUG, EAL, "%s(): mempool_ops <%s>: name too long\n",
+				__func__, h->name);
+		rte_errno = EEXIST;
+		return -EEXIST;
+	}
+
+	ops_index = rte_mempool_ops_table.num_ops++;
+	ops = &rte_mempool_ops_table.ops[ops_index];
+	snprintf(ops->name, sizeof(ops->name), "%s", h->name);
+	ops->alloc = h->alloc;
+	ops->enqueue = h->enqueue;
+	ops->dequeue = h->dequeue;
+	ops->get_count = h->get_count;
+
+	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
+
+	return ops_index;
+}
+
+/* wrapper to allocate an external mempool's private (pool) data. */
+int
+rte_mempool_ops_alloc(struct rte_mempool *mp)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_get_ops(mp->ops_index);
+	return ops->alloc(mp);
+}
+
+/* wrapper to free an external mempool's pool data via its ops. */
+void
+rte_mempool_ops_free(struct rte_mempool *mp)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_get_ops(mp->ops_index);
+	if (ops->free == NULL)
+		return;
+	ops->free(mp);
+}
+
+/* wrapper to get available objects in an external mempool. */
+unsigned int
+rte_mempool_ops_get_count(const struct rte_mempool *mp)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_get_ops(mp->ops_index);
+	return ops->get_count(mp);
+}
+
+/* sets mempool ops previously registered by rte_mempool_register_ops. */
+int
+rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
+	void *pool_config)
+{
+	struct rte_mempool_ops *ops = NULL;
+	unsigned i;
+
+	/* too late, the mempool is already populated. */
+	if (mp->flags & MEMPOOL_F_POOL_CREATED)
+		return -EEXIST;
+
+	for (i = 0; i < rte_mempool_ops_table.num_ops; i++) {
+		if (!strcmp(name,
+				rte_mempool_ops_table.ops[i].name)) {
+			ops = &rte_mempool_ops_table.ops[i];
+			break;
+		}
+	}
+
+	if (ops == NULL)
+		return -EINVAL;
+
+	mp->ops_index = i;
+	mp->pool_config = pool_config;
+	return 0;
+}
diff --git a/lib/librte_mempool/rte_mempool_ring.c b/lib/librte_mempool/rte_mempool_ring.c
new file mode 100644
index 0000000..b9aa64d
--- /dev/null
+++ b/lib/librte_mempool/rte_mempool_ring.c
@@ -0,0 +1,161 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <string.h>
+
+#include <rte_errno.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+
+static int
+common_ring_mp_enqueue(struct rte_mempool *mp, void * const *obj_table,
+		unsigned n)
+{
+	return rte_ring_mp_enqueue_bulk(mp->pool_data, obj_table, n);
+}
+
+static int
+common_ring_sp_enqueue(struct rte_mempool *mp, void * const *obj_table,
+		unsigned n)
+{
+	return rte_ring_sp_enqueue_bulk(mp->pool_data, obj_table, n);
+}
+
+static int
+common_ring_mc_dequeue(struct rte_mempool *mp, void **obj_table, unsigned n)
+{
+	return rte_ring_mc_dequeue_bulk(mp->pool_data, obj_table, n);
+}
+
+static int
+common_ring_sc_dequeue(struct rte_mempool *mp, void **obj_table, unsigned n)
+{
+	return rte_ring_sc_dequeue_bulk(mp->pool_data, obj_table, n);
+}
+
+static unsigned
+common_ring_get_count(const struct rte_mempool *mp)
+{
+	return rte_ring_count(mp->pool_data);
+}
+
+
+static int
+common_ring_alloc(struct rte_mempool *mp)
+{
+	int rg_flags = 0, ret;
+	char rg_name[RTE_RING_NAMESIZE];
+	struct rte_ring *r;
+
+	ret = snprintf(rg_name, sizeof(rg_name),
+		RTE_MEMPOOL_MZ_FORMAT, mp->name);
+	if (ret < 0 || ret >= (int)sizeof(rg_name)) {
+		rte_errno = ENAMETOOLONG;
+		return -rte_errno;
+	}
+
+	/* ring flags */
+	if (mp->flags & MEMPOOL_F_SP_PUT)
+		rg_flags |= RING_F_SP_ENQ;
+	if (mp->flags & MEMPOOL_F_SC_GET)
+		rg_flags |= RING_F_SC_DEQ;
+
+	/*
+	 * Allocate the ring that will be used to store objects.
+	 * Ring functions will return appropriate errors if we are
+	 * running as a secondary process etc., so no checks made
+	 * in this function for that condition.
+	 */
+	r = rte_ring_create(rg_name, rte_align32pow2(mp->size + 1),
+		mp->socket_id, rg_flags);
+	if (r == NULL)
+		return -rte_errno;
+
+	mp->pool_data = r;
+
+	return 0;
+}
+
+static void
+common_ring_free(struct rte_mempool *mp)
+{
+	rte_ring_free(mp->pool_data);
+}
+
+/*
+ * The following 4 declarations of mempool ops structs address
+ * the need for the backward compatible mempool handlers for
+ * single/multi producers and single/multi consumers as dictated by the
+ * flags provided to the rte_mempool_create function
+ */
+static const struct rte_mempool_ops ops_mp_mc = {
+	.name = "ring_mp_mc",
+	.alloc = common_ring_alloc,
+	.free = common_ring_free,
+	.enqueue = common_ring_mp_enqueue,
+	.dequeue = common_ring_mc_dequeue,
+	.get_count = common_ring_get_count,
+};
+
+static const struct rte_mempool_ops ops_sp_sc = {
+	.name = "ring_sp_sc",
+	.alloc = common_ring_alloc,
+	.free = common_ring_free,
+	.enqueue = common_ring_sp_enqueue,
+	.dequeue = common_ring_sc_dequeue,
+	.get_count = common_ring_get_count,
+};
+
+static const struct rte_mempool_ops ops_mp_sc = {
+	.name = "ring_mp_sc",
+	.alloc = common_ring_alloc,
+	.free = common_ring_free,
+	.enqueue = common_ring_mp_enqueue,
+	.dequeue = common_ring_sc_dequeue,
+	.get_count = common_ring_get_count,
+};
+
+static const struct rte_mempool_ops ops_sp_mc = {
+	.name = "ring_sp_mc",
+	.alloc = common_ring_alloc,
+	.free = common_ring_free,
+	.enqueue = common_ring_sp_enqueue,
+	.dequeue = common_ring_mc_dequeue,
+	.get_count = common_ring_get_count,
+};
+
+MEMPOOL_REGISTER_OPS(ops_mp_mc);
+MEMPOOL_REGISTER_OPS(ops_sp_sc);
+MEMPOOL_REGISTER_OPS(ops_mp_sc);
+MEMPOOL_REGISTER_OPS(ops_sp_mc);
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index f63461b..a4a6c1f 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -20,15 +20,18 @@ DPDK_16.7 {
 	global:
 
 	rte_mempool_check_cookies;
-	rte_mempool_obj_iter;
-	rte_mempool_mem_iter;
 	rte_mempool_create_empty;
+	rte_mempool_free;
+	rte_mempool_mem_iter;
+	rte_mempool_obj_iter;
+	rte_mempool_ops_table;
+	rte_mempool_populate_anon;
+	rte_mempool_populate_default;
 	rte_mempool_populate_phys;
 	rte_mempool_populate_phys_tab;
 	rte_mempool_populate_virt;
-	rte_mempool_populate_default;
-	rte_mempool_populate_anon;
-	rte_mempool_free;
+	rte_mempool_register_ops;
+	rte_mempool_set_ops_byname;
 
 	local: *;
 } DPDK_2.0;
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 238+ messages in thread

* [PATCH v16 2/3] app/test: test mempool handler
  2016-06-22  9:27                             ` [PATCH v16 " David Hunt
  2016-06-22  9:27                               ` [PATCH v16 1/3] mempool: support mempool handler operations David Hunt
@ 2016-06-22  9:27                               ` David Hunt
  2016-06-22  9:27                               ` [PATCH v16 3/3] mbuf: make default mempool ops configurable at build David Hunt
  2016-06-23 21:22                               ` [PATCH v16 0/3] mempool: add mempool handler feature Thomas Monjalon
  3 siblings, 0 replies; 238+ messages in thread
From: David Hunt @ 2016-06-22  9:27 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, viktorin, jerin.jacob, shreyansh.jain, David Hunt

Create a minimal custom mempool handler and check that it
passes basic mempool autotests.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
Reviewed-by: Jan Viktorin <viktorin@rehivetech.com>
Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
 app/test/test_mempool.c | 122 +++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 120 insertions(+), 2 deletions(-)

diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c
index b586249..31582d8 100644
--- a/app/test/test_mempool.c
+++ b/app/test/test_mempool.c
@@ -83,6 +83,99 @@
 static rte_atomic32_t synchro;
 
 /*
+ * Simple example of custom mempool structure. Holds pointers to all the
+ * elements which are simply malloc'd in this example.
+ */
+struct custom_mempool {
+	rte_spinlock_t lock;
+	unsigned count;
+	unsigned size;
+	void *elts[];
+};
+
+/*
+ * Loop through all the element pointers and allocate a chunk of memory, then
+ * insert that memory into the ring.
+ */
+static int
+custom_mempool_alloc(struct rte_mempool *mp)
+{
+	struct custom_mempool *cm;
+
+	cm = rte_zmalloc("custom_mempool",
+		sizeof(struct custom_mempool) + mp->size * sizeof(void *), 0);
+	if (cm == NULL)
+		return -ENOMEM;
+
+	rte_spinlock_init(&cm->lock);
+	cm->count = 0;
+	cm->size = mp->size;
+	mp->pool_data = cm;
+	return 0;
+}
+
+static void
+custom_mempool_free(struct rte_mempool *mp)
+{
+	rte_free((void *)(mp->pool_data));
+}
+
+static int
+custom_mempool_enqueue(struct rte_mempool *mp, void * const *obj_table,
+		unsigned n)
+{
+	struct custom_mempool *cm = (struct custom_mempool *)(mp->pool_data);
+	int ret = 0;
+
+	rte_spinlock_lock(&cm->lock);
+	if (cm->count + n > cm->size) {
+		ret = -ENOBUFS;
+	} else {
+		memcpy(&cm->elts[cm->count], obj_table, sizeof(void *) * n);
+		cm->count += n;
+	}
+	rte_spinlock_unlock(&cm->lock);
+	return ret;
+}
+
+
+static int
+custom_mempool_dequeue(struct rte_mempool *mp, void **obj_table, unsigned n)
+{
+	struct custom_mempool *cm = (struct custom_mempool *)(mp->pool_data);
+	int ret = 0;
+
+	rte_spinlock_lock(&cm->lock);
+	if (n > cm->count) {
+		ret = -ENOENT;
+	} else {
+		cm->count -= n;
+		memcpy(obj_table, &cm->elts[cm->count], sizeof(void *) * n);
+	}
+	rte_spinlock_unlock(&cm->lock);
+	return ret;
+}
+
+static unsigned
+custom_mempool_get_count(const struct rte_mempool *mp)
+{
+	struct custom_mempool *cm = (struct custom_mempool *)(mp->pool_data);
+
+	return cm->count;
+}
+
+static struct rte_mempool_ops mempool_ops_custom = {
+	.name = "custom_handler",
+	.alloc = custom_mempool_alloc,
+	.free = custom_mempool_free,
+	.enqueue = custom_mempool_enqueue,
+	.dequeue = custom_mempool_dequeue,
+	.get_count = custom_mempool_get_count,
+};
+
+MEMPOOL_REGISTER_OPS(mempool_ops_custom);
+
+/*
  * save the object number in the first 4 bytes of object data. All
  * other bytes are set to 0.
  */
@@ -292,12 +385,14 @@ static int test_mempool_single_consumer(void)
  * test function for mempool test based on single consumer and single producer,
  * can run on one lcore only
  */
-static int test_mempool_launch_single_consumer(__attribute__((unused)) void *arg)
+static int
+test_mempool_launch_single_consumer(__attribute__((unused)) void *arg)
 {
 	return test_mempool_single_consumer();
 }
 
-static void my_mp_init(struct rte_mempool * mp, __attribute__((unused)) void * arg)
+static void
+my_mp_init(struct rte_mempool *mp, __attribute__((unused)) void *arg)
 {
 	printf("mempool name is %s\n", mp->name);
 	/* nothing to be implemented here*/
@@ -477,6 +572,7 @@ test_mempool(void)
 {
 	struct rte_mempool *mp_cache = NULL;
 	struct rte_mempool *mp_nocache = NULL;
+	struct rte_mempool *mp_ext = NULL;
 
 	rte_atomic32_init(&synchro);
 
@@ -505,6 +601,27 @@ test_mempool(void)
 		goto err;
 	}
 
+	/* create a mempool with an external handler */
+	mp_ext = rte_mempool_create_empty("test_ext",
+		MEMPOOL_SIZE,
+		MEMPOOL_ELT_SIZE,
+		RTE_MEMPOOL_CACHE_MAX_SIZE, 0,
+		SOCKET_ID_ANY, 0);
+
+	if (mp_ext == NULL) {
+		printf("cannot allocate mp_ext mempool\n");
+		goto err;
+	}
+	if (rte_mempool_set_ops_byname(mp_ext, "custom_handler", NULL) < 0) {
+		printf("cannot set custom handler\n");
+		goto err;
+	}
+	if (rte_mempool_populate_default(mp_ext) < 0) {
+		printf("cannot populate mp_ext mempool\n");
+		goto err;
+	}
+	rte_mempool_obj_iter(mp_ext, my_obj_init, NULL);
+
 	/* retrieve the mempool from its name */
 	if (rte_mempool_lookup("test_nocache") != mp_nocache) {
 		printf("Cannot lookup mempool from its name\n");
@@ -545,6 +662,7 @@ test_mempool(void)
 err:
 	rte_mempool_free(mp_nocache);
 	rte_mempool_free(mp_cache);
+	rte_mempool_free(mp_ext);
 	return -1;
 }
 
-- 
2.5.5


* [PATCH v16 3/3] mbuf: make default mempool ops configurable at build
  2016-06-22  9:27                             ` [PATCH v16 " David Hunt
  2016-06-22  9:27                               ` [PATCH v16 1/3] mempool: support mempool handler operations David Hunt
  2016-06-22  9:27                               ` [PATCH v16 2/3] app/test: test mempool handler David Hunt
@ 2016-06-22  9:27                               ` David Hunt
  2016-06-23 21:22                               ` [PATCH v16 0/3] mempool: add mempool handler feature Thomas Monjalon
  3 siblings, 0 replies; 238+ messages in thread
From: David Hunt @ 2016-06-22  9:27 UTC (permalink / raw)
  To: dev; +Cc: olivier.matz, viktorin, jerin.jacob, shreyansh.jain, David Hunt

By default, the mempool ops used for mbuf allocations is a
multi-producer, multi-consumer ring. We could imagine a target (maybe some
network processors?) that provides a hardware-assisted pool
mechanism. In this case, the default configuration for this architecture
would contain a different value for RTE_MBUF_DEFAULT_MEMPOOL_OPS.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
Reviewed-by: Jan Viktorin <viktorin@rehivetech.com>
Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
 config/common_base         |  1 +
 lib/librte_mbuf/rte_mbuf.c | 26 ++++++++++++++++++++++----
 2 files changed, 23 insertions(+), 4 deletions(-)

diff --git a/config/common_base b/config/common_base
index 11ac81e..5f230db 100644
--- a/config/common_base
+++ b/config/common_base
@@ -394,6 +394,7 @@ CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG=n
 #
 CONFIG_RTE_LIBRTE_MBUF=y
 CONFIG_RTE_LIBRTE_MBUF_DEBUG=n
+CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS="ring_mp_mc"
 CONFIG_RTE_MBUF_REFCNT_ATOMIC=y
 CONFIG_RTE_PKTMBUF_HEADROOM=128
 
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index 2ece742..8cf5436 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -153,6 +153,7 @@ rte_pktmbuf_pool_create(const char *name, unsigned n,
 	unsigned cache_size, uint16_t priv_size, uint16_t data_room_size,
 	int socket_id)
 {
+	struct rte_mempool *mp;
 	struct rte_pktmbuf_pool_private mbp_priv;
 	unsigned elt_size;
 
@@ -167,10 +168,27 @@ rte_pktmbuf_pool_create(const char *name, unsigned n,
 	mbp_priv.mbuf_data_room_size = data_room_size;
 	mbp_priv.mbuf_priv_size = priv_size;
 
-	return rte_mempool_create(name, n, elt_size,
-		cache_size, sizeof(struct rte_pktmbuf_pool_private),
-		rte_pktmbuf_pool_init, &mbp_priv, rte_pktmbuf_init, NULL,
-		socket_id, 0);
+	mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
+		 sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
+	if (mp == NULL)
+		return NULL;
+
+	rte_errno = rte_mempool_set_ops_byname(mp,
+			RTE_MBUF_DEFAULT_MEMPOOL_OPS, NULL);
+	if (rte_errno != 0) {
+		RTE_LOG(ERR, MBUF, "error setting mempool handler\n");
+		return NULL;
+	}
+	rte_pktmbuf_pool_init(mp, &mbp_priv);
+
+	if (rte_mempool_populate_default(mp) < 0) {
+		rte_mempool_free(mp);
+		return NULL;
+	}
+
+	rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);
+
+	return mp;
 }
 
 /* do some sanity checks on a mbuf: panic if it fails */
-- 
2.5.5


* Re: [PATCH v16 0/3] mempool: add mempool handler feature
  2016-06-22  9:27                             ` [PATCH v16 " David Hunt
                                                 ` (2 preceding siblings ...)
  2016-06-22  9:27                               ` [PATCH v16 3/3] mbuf: make default mempool ops configurable at build David Hunt
@ 2016-06-23 21:22                               ` Thomas Monjalon
  2016-06-24  4:55                                 ` Wiles, Keith
  3 siblings, 1 reply; 238+ messages in thread
From: Thomas Monjalon @ 2016-06-23 21:22 UTC (permalink / raw)
  To: David Hunt; +Cc: dev, olivier.matz, viktorin, jerin.jacob, shreyansh.jain

> David Hunt (2):
>   mempool: support mempool handler operations
>   app/test: test mempool handler
>   mbuf: make default mempool ops configurable at build

Applied, thanks for the nice feature

I'm sorry David, the revision record is v17 ;)


* Re: [PATCH v16 0/3] mempool: add mempool handler feature
  2016-06-23 21:22                               ` [PATCH v16 0/3] mempool: add mempool handler feature Thomas Monjalon
@ 2016-06-24  4:55                                 ` Wiles, Keith
  2016-06-24 11:20                                   ` Jan Viktorin
  0 siblings, 1 reply; 238+ messages in thread
From: Wiles, Keith @ 2016-06-24  4:55 UTC (permalink / raw)
  To: Thomas Monjalon, Hunt, David
  Cc: dev, olivier.matz, viktorin, jerin.jacob, shreyansh.jain


On 6/23/16, 11:22 PM, "dev on behalf of Thomas Monjalon" <dev-bounces@dpdk.org on behalf of thomas.monjalon@6wind.com> wrote:

>> David Hunt (2):
>>   mempool: support mempool handler operations
>>   app/test: test mempool handler
>>   mbuf: make default mempool ops configurable at build
>
>Applied, thanks for the nice feature
>
>I'm sorry David, the revision record is v17 ;)

Quick David, make two more updates to the patch ☺

Thanks David and Great work !!!
>
>





* Re: [PATCH v16 0/3] mempool: add mempool handler feature
  2016-06-24  4:55                                 ` Wiles, Keith
@ 2016-06-24 11:20                                   ` Jan Viktorin
  2016-06-24 11:24                                     ` Thomas Monjalon
  0 siblings, 1 reply; 238+ messages in thread
From: Jan Viktorin @ 2016-06-24 11:20 UTC (permalink / raw)
  To: Hunt, David
  Cc: Wiles, Keith, Thomas Monjalon, dev, olivier.matz, jerin.jacob,
	shreyansh.jain

On Fri, 24 Jun 2016 04:55:39 +0000
"Wiles, Keith" <keith.wiles@intel.com> wrote:

> On 6/23/16, 11:22 PM, "dev on behalf of Thomas Monjalon" <dev-bounces@dpdk.org on behalf of thomas.monjalon@6wind.com> wrote:
> 
> >> David Hunt (2):
> >>   mempool: support mempool handler operations
> >>   app/test: test mempool handler
> >>   mbuf: make default mempool ops configurable at build  
> >
> >Applied, thanks for the nice feature
> >
> >I'm sorry David, the revision record is v17 ;)  
> 
> Quick David, make two more updates to the patch ☺
> 
> Thanks David and Great work !!!
> >
> >  
> 

Hello David,

thanks for the patchset. I am sorry, I didn't have any time for DPDK this week
and didn't test it before applying. The current master produces the following
error in my regular builds:

  INSTALL-LIB librte_eal.a
== Build lib/librte_ring
  CC rte_ring.o
  AR librte_ring.a
  SYMLINK-FILE include/rte_ring.h
  INSTALL-LIB librte_ring.a
== Build lib/librte_mempool
  CC rte_mempool.o
make[3]: *** No rule to make target `rte_mempool_ops.o', needed by `librte_mempool.a'.  Stop.
make[2]: *** [librte_mempool] Error 2
make[1]: *** [lib] Error 2
make: *** [all] Error 2
Build step 'Execute shell' marked build as failure
[WARNINGS] Skipping publisher since build result is FAILURE

I have no idea about the reason at the moment. I'll check it soon.

Regards
Jan


* Re: [PATCH v16 0/3] mempool: add mempool handler feature
  2016-06-24 11:20                                   ` Jan Viktorin
@ 2016-06-24 11:24                                     ` Thomas Monjalon
  2016-06-24 13:10                                       ` Jan Viktorin
  0 siblings, 1 reply; 238+ messages in thread
From: Thomas Monjalon @ 2016-06-24 11:24 UTC (permalink / raw)
  To: Jan Viktorin
  Cc: Hunt, David, Wiles, Keith, dev, olivier.matz, jerin.jacob,
	shreyansh.jain

2016-06-24 13:20, Jan Viktorin:
> thanks for the patchset. I am sorry, I didn't have any time for DPDK this week
> and didn't test it before applying. The current master produces the following
> error in my regular builds:
> 
>   INSTALL-LIB librte_eal.a
> == Build lib/librte_ring
>   CC rte_ring.o
>   AR librte_ring.a
>   SYMLINK-FILE include/rte_ring.h
>   INSTALL-LIB librte_ring.a
> == Build lib/librte_mempool
>   CC rte_mempool.o
> make[3]: *** No rule to make target `rte_mempool_ops.o', needed by `librte_mempool.a'.  Stop.

It should be fixed now.


* Re: [PATCH v16 0/3] mempool: add mempool handler feature
  2016-06-24 11:24                                     ` Thomas Monjalon
@ 2016-06-24 13:10                                       ` Jan Viktorin
  0 siblings, 0 replies; 238+ messages in thread
From: Jan Viktorin @ 2016-06-24 13:10 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: Hunt, David, Wiles, Keith, dev, olivier.matz, jerin.jacob,
	shreyansh.jain

On Fri, 24 Jun 2016 13:24:56 +0200
Thomas Monjalon <thomas.monjalon@6wind.com> wrote:

> 2016-06-24 13:20, Jan Viktorin:
> > thanks for the patchset. I am sorry, I didn't have any time for DPDK this week
> > and didn't test it before applying. The current master produces the following
> > error in my regular builds:
> > 
> >   INSTALL-LIB librte_eal.a
> > == Build lib/librte_ring
> >   CC rte_ring.o
> >   AR librte_ring.a
> >   SYMLINK-FILE include/rte_ring.h
> >   INSTALL-LIB librte_ring.a
> > == Build lib/librte_mempool
> >   CC rte_mempool.o
> > make[3]: *** No rule to make target `rte_mempool_ops.o', needed by `librte_mempool.a'.  Stop.  
> 
> It should be fixed now.

OK, confirmed. It seems that I only receive notifications of failures :).

Jan


end of thread, other threads:[~2016-06-24 13:17 UTC | newest]

Thread overview: 238+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-01-26 17:25 [PATCH 0/5] add external mempool manager David Hunt
2016-01-26 17:25 ` [PATCH 1/5] mempool: add external mempool manager support David Hunt
2016-01-28 17:52   ` Jerin Jacob
2016-02-03 14:16     ` Hunt, David
2016-02-04 13:23       ` Jerin Jacob
2016-02-04 14:52   ` Olivier MATZ
2016-02-04 16:47     ` Hunt, David
2016-02-08 11:02       ` Olivier MATZ
2016-02-04 17:34     ` Hunt, David
2016-02-05  9:26       ` Olivier MATZ
2016-03-01 13:32     ` Hunt, David
2016-03-04  9:05       ` Olivier MATZ
2016-03-08 10:04         ` Hunt, David
2016-01-26 17:25 ` [PATCH 2/5] memool: add stack (lifo) based external mempool handler David Hunt
2016-02-04 15:02   ` Olivier MATZ
2016-01-26 17:25 ` [PATCH 3/5] mempool: add custom external mempool handler example David Hunt
2016-01-28 17:54   ` Jerin Jacob
2016-01-26 17:25 ` [PATCH 4/5] mempool: add autotest for external mempool custom example David Hunt
2016-01-26 17:25 ` [PATCH 5/5] mempool: allow rte_pktmbuf_pool_create switch between memool handlers David Hunt
2016-02-05 10:11   ` Olivier MATZ
2016-01-28 17:26 ` [PATCH 0/5] add external mempool manager Jerin Jacob
2016-01-29 13:40   ` Hunt, David
2016-01-29 17:16     ` Jerin Jacob
2016-02-16 14:48 ` [PATCH 0/6] " David Hunt
2016-02-16 14:48   ` [PATCH 1/6] mempool: add external mempool manager support David Hunt
2016-02-16 19:27     ` [dpdk-dev, " Jan Viktorin
2016-02-19 13:30     ` [PATCH " Olivier MATZ
2016-02-29 11:11       ` Hunt, David
2016-03-04  9:04         ` Olivier MATZ
2016-02-16 14:48   ` [PATCH 2/6] mempool: add stack (lifo) based external mempool handler David Hunt
2016-02-19 13:31     ` Olivier MATZ
2016-02-29 11:04       ` Hunt, David
2016-03-04  9:04         ` Olivier MATZ
2016-03-08 20:45       ` Venkatesan, Venky
2016-03-09 14:53         ` Olivier MATZ
2016-02-16 14:48   ` [PATCH 3/6] mempool: adds a simple ring-based mempool handler using mallocs for objects David Hunt
2016-02-16 14:48   ` [PATCH 4/6] mempool: add autotest for external mempool custom example David Hunt
2016-02-16 14:48   ` [PATCH 5/6] mempool: allow rte_pktmbuf_pool_create switch between memool handlers David Hunt
2016-02-16 14:48   ` [PATCH 6/6] mempool: add in the RTE_NEXT_ABI protection for ABI breakages David Hunt
2016-02-19 13:33     ` Olivier MATZ
2016-02-19 13:25   ` [PATCH 0/6] external mempool manager Olivier MATZ
2016-02-29 10:55     ` Hunt, David
2016-03-09  9:50   ` [PATCH v3 0/4] " David Hunt
2016-03-09  9:50     ` [PATCH v3 1/4] mempool: add external mempool manager support David Hunt
2016-04-11 22:52       ` Yuanhan Liu
2016-03-09  9:50     ` [PATCH v3 2/4] mempool: add custom mempool handler example David Hunt
2016-03-09  9:50     ` [PATCH v3 3/4] mempool: allow rte_pktmbuf_pool_create switch between memool handlers David Hunt
2016-03-09 10:54       ` Panu Matilainen
2016-03-09 11:38         ` Hunt, David
2016-03-09 11:44           ` Panu Matilainen
2016-03-09  9:50     ` [PATCH v3 4/4] mempool: add in the RTE_NEXT_ABI for ABI breakages David Hunt
2016-03-09 10:46       ` Panu Matilainen
2016-03-09 11:30         ` Hunt, David
2016-03-09 14:59           ` Olivier MATZ
2016-03-09 16:28             ` Hunt, David
2016-03-09 16:31               ` Olivier MATZ
2016-03-09 16:39                 ` Hunt, David
2016-03-09 11:10     ` [PATCH v3 0/4] external mempool manager Hunt, David
2016-04-11 22:46     ` Yuanhan Liu
2016-04-14 13:57     ` [PATCH v4 0/3] " Olivier Matz
2016-04-14 13:57       ` [PATCH v4 1/3] mempool: support external handler Olivier Matz
2016-04-14 13:57       ` [PATCH v4 2/3] app/test: test external mempool handler Olivier Matz
2016-04-14 13:57       ` [PATCH v4 3/3] mbuf: get default mempool handler from configuration Olivier Matz
2016-05-19 13:44       ` mempool: external mempool manager David Hunt
2016-05-19 13:44         ` [PATCH v5 1/3] mempool: support external handler David Hunt
2016-05-23 12:35           ` [dpdk-dev,v5,1/3] " Jan Viktorin
2016-05-24 14:04             ` Hunt, David
2016-05-31  9:09             ` Hunt, David
2016-05-31 12:06               ` Jan Viktorin
2016-05-31 13:47                 ` Hunt, David
2016-05-31 20:40                   ` Olivier MATZ
2016-06-01  9:39                     ` Hunt, David
2016-06-01 12:30                     ` Jan Viktorin
2016-05-24 15:35           ` [PATCH v5 1/3] " Jerin Jacob
2016-05-27  9:52             ` Hunt, David
2016-05-27 10:33               ` Jerin Jacob
2016-05-27 14:44                 ` Hunt, David
2016-05-30  9:41                   ` Jerin Jacob
2016-05-30 11:27                     ` Hunt, David
2016-05-31  8:53                       ` Jerin Jacob
2016-05-31 15:37                         ` Hunt, David
2016-05-31 16:03                           ` Jerin Jacob
2016-05-31 20:41                             ` Olivier MATZ
2016-05-31 21:11                               ` Jerin Jacob
2016-06-01 10:46                                 ` Hunt, David
2016-06-01 11:18                                   ` Jerin Jacob
2016-05-19 13:45         ` [PATCH v5 2/3] app/test: test external mempool handler David Hunt
2016-05-23 12:45           ` [dpdk-dev, v5, " Jan Viktorin
2016-05-31  9:17             ` Hunt, David
2016-05-31 12:14               ` Jan Viktorin
2016-05-31 20:40                 ` Olivier MATZ
2016-05-19 13:45         ` [PATCH v5 3/3] mbuf: get default mempool handler from configuration David Hunt
2016-05-23 12:40           ` [dpdk-dev, v5, " Jan Viktorin
2016-05-31  9:26             ` Hunt, David
2016-06-01 16:19         ` [PATCH v6 0/5] mempool: add external mempool manager David Hunt
2016-06-01 16:19           ` [PATCH v6 1/5] mempool: support external handler David Hunt
2016-06-01 16:29             ` Hunt, David
2016-06-01 17:54             ` Jan Viktorin
2016-06-02  9:11               ` Hunt, David
2016-06-02 11:23               ` Hunt, David
2016-06-02 13:43                 ` Jan Viktorin
2016-06-01 16:19           ` [PATCH v6 2/5] mempool: remove rte_ring from rte_mempool struct David Hunt
2016-06-01 16:19           ` [PATCH v6 3/5] mempool: add default external mempool handler David Hunt
2016-06-01 16:19           ` [PATCH v6 4/5] app/test: test " David Hunt
2016-06-01 16:19           ` [PATCH v6 5/5] mbuf: get default mempool handler from configuration David Hunt
2016-06-02 13:27           ` [PATCH v7 0/5] mempool: add external mempool manager David Hunt
2016-06-02 13:27             ` [PATCH v7 1/5] mempool: support external mempool operations David Hunt
2016-06-02 13:38               ` [PATCH v7 0/5] mempool: add external mempool manager Hunt, David
2016-06-03  6:38               ` [PATCH v7 1/5] mempool: support external mempool operations Jerin Jacob
2016-06-03 10:28                 ` Hunt, David
2016-06-03 10:49                   ` Jerin Jacob
2016-06-03 11:07                   ` Olivier MATZ
2016-06-03 11:42                     ` Jan Viktorin
2016-06-03 12:10                     ` Hunt, David
2016-06-03 12:28               ` Olivier MATZ
2016-06-02 13:27             ` [PATCH v7 2/5] mempool: remove rte_ring from rte_mempool struct David Hunt
2016-06-03 12:28               ` Olivier MATZ
2016-06-03 14:17                 ` Hunt, David
2016-06-02 13:27             ` [PATCH v7 3/5] mempool: add default external mempool ops David Hunt
2016-06-02 13:27             ` [PATCH v7 4/5] app/test: test external mempool manager David Hunt
2016-06-02 13:27             ` [PATCH v7 5/5] mbuf: allow apps to change default mempool ops David Hunt
2016-06-03 12:28               ` Olivier MATZ
2016-06-03 14:06                 ` Hunt, David
2016-06-03 14:10                   ` Olivier Matz
2016-06-03 14:14                     ` Hunt, David
2016-06-03 14:58             ` [PATCH v8 0/5] mempool: add external mempool manager David Hunt
2016-06-03 14:58               ` [PATCH v8 1/3] mempool: support external mempool operations David Hunt
2016-06-06 14:32                 ` Shreyansh Jain
2016-06-06 14:38                 ` Shreyansh Jain
2016-06-07  9:25                   ` Hunt, David
2016-06-08 13:48                     ` Shreyansh Jain
2016-06-09  9:39                       ` Hunt, David
2016-06-09 10:31                         ` Jerin Jacob
2016-06-09 11:06                           ` Hunt, David
2016-06-09 11:49                           ` Shreyansh Jain
2016-06-09 12:30                             ` Jerin Jacob
2016-06-09 13:03                               ` Shreyansh Jain
2016-06-09 13:18                               ` Hunt, David
2016-06-09 13:37                                 ` Jerin Jacob
2016-06-09 11:41                         ` Shreyansh Jain
2016-06-09 12:55                           ` Hunt, David
2016-06-09 13:09                         ` Jan Viktorin
2016-06-10  7:29                           ` Olivier Matz
2016-06-10  8:49                             ` Jan Viktorin
2016-06-10  9:02                               ` Hunt, David
2016-06-10  9:34                             ` Hunt, David
2016-06-10 11:29                               ` Shreyansh Jain
2016-06-10 11:13                             ` Jerin Jacob
2016-06-10 11:37                             ` Shreyansh Jain
2016-06-07  9:05                 ` Shreyansh Jain
2016-06-08 12:13                 ` Olivier Matz
2016-06-09 10:33                   ` Hunt, David
2016-06-08 14:28                 ` Shreyansh Jain
2016-06-03 14:58               ` [PATCH v8 2/3] app/test: test external mempool manager David Hunt
2016-06-03 14:58               ` [PATCH v8 3/3] mbuf: make default mempool ops configurable at build David Hunt
2016-06-10 15:16               ` [PATCH v9 0/3] mempool: add external mempool manager David Hunt
2016-06-10 15:16                 ` [PATCH v9 1/3] mempool: support external mempool operations David Hunt
2016-06-13 12:16                   ` Olivier Matz
2016-06-13 13:46                     ` Hunt, David
2016-06-10 15:16                 ` [PATCH v9 2/3] app/test: test external mempool manager David Hunt
2016-06-10 15:16                 ` [PATCH v9 3/3] mbuf: make default mempool ops configurable at build David Hunt
2016-06-14  9:46                 ` [PATCH v10 0/3] mempool: add external mempool manager David Hunt
2016-06-14  9:46                   ` [PATCH v10 1/3] mempool: support external mempool operations David Hunt
2016-06-14 11:38                     ` Shreyansh Jain
2016-06-14 12:55                     ` Thomas Monjalon
2016-06-14 13:20                       ` Hunt, David
2016-06-14 13:29                         ` Thomas Monjalon
2016-06-14  9:46                   ` [PATCH v10 2/3] app/test: test external mempool manager David Hunt
2016-06-14 11:39                     ` Shreyansh Jain
2016-06-14  9:46                   ` [PATCH v10 3/3] mbuf: make default mempool ops configurable at build David Hunt
2016-06-14 11:45                     ` Shreyansh Jain
2016-06-14 12:32                   ` [PATCH v10 0/3] mempool: add external mempool manager Olivier MATZ
2016-06-14 15:48                   ` [PATCH v11 " David Hunt
2016-06-14 15:48                     ` [PATCH v11 1/3] mempool: support external mempool operations David Hunt
2016-06-14 16:08                       ` Thomas Monjalon
2016-06-14 15:49                     ` [PATCH v11 2/3] app/test: test external mempool manager David Hunt
2016-06-14 15:49                     ` [PATCH v11 3/3] mbuf: make default mempool ops configurable at build David Hunt
2016-06-15  7:47                     ` [PATCH v12 0/3] mempool: add external mempool manager David Hunt
2016-06-15  7:47                       ` [PATCH v12 1/3] mempool: support external mempool operations David Hunt
2016-06-15 10:14                         ` Jan Viktorin
2016-06-15 10:29                           ` Hunt, David
2016-06-15 11:26                             ` Jan Viktorin
2016-06-15 11:38                             ` Thomas Monjalon
2016-06-15  7:47                       ` [PATCH v12 2/3] app/test: test external mempool manager David Hunt
2016-06-15  7:47                       ` [PATCH v12 3/3] mbuf: make default mempool ops configurable at build David Hunt
2016-06-15 10:13                       ` [PATCH v12 0/3] mempool: add external mempool manager Jan Viktorin
2016-06-15 11:47                         ` Hunt, David
2016-06-15 12:03                           ` Olivier MATZ
2016-06-15 12:38                             ` Hunt, David
2016-06-15 13:50                               ` Olivier MATZ
2016-06-15 14:02                                 ` Hunt, David
2016-06-15 14:10                                   ` Olivier MATZ
2016-06-15 14:47                                     ` Jan Viktorin
2016-06-15 16:03                                       ` Hunt, David
2016-06-15 16:34                             ` Hunt, David
2016-06-15 16:40                               ` Olivier MATZ
2016-06-16  4:35                                 ` Shreyansh Jain
2016-06-16  7:04                                   ` Hunt, David
2016-06-16  7:47                                 ` Hunt, David
2016-06-16  8:47                                   ` Olivier MATZ
2016-06-16  8:55                                     ` Hunt, David
2016-06-16  8:58                                       ` Olivier MATZ
2016-06-16 11:34                                         ` Hunt, David
2016-06-16 12:30                       ` [PATCH v13 " David Hunt
2016-06-16 12:30                         ` [PATCH v13 1/3] mempool: support external mempool operations David Hunt
2016-06-17  6:58                           ` Hunt, David
2016-06-17  8:08                             ` Olivier Matz
2016-06-17  8:42                               ` Hunt, David
2016-06-17  9:09                                 ` Thomas Monjalon
2016-06-17  9:24                                   ` Hunt, David
2016-06-17 10:19                                     ` Olivier Matz
2016-06-17 10:18                           ` Olivier Matz
2016-06-17 10:47                             ` Hunt, David
2016-06-16 12:30                         ` [PATCH v13 2/3] app/test: test external mempool manager David Hunt
2016-06-16 12:30                         ` [PATCH v13 3/3] mbuf: make default mempool ops configurable at build David Hunt
2016-06-17 13:53                         ` [PATCH v14 0/3] mempool: add mempool handler feature David Hunt
2016-06-17 13:53                           ` [PATCH v14 1/3] mempool: support mempool handler operations David Hunt
2016-06-17 14:35                             ` Jan Viktorin
2016-06-19 11:44                               ` Hunt, David
2016-06-17 13:53                           ` [PATCH v14 2/3] app/test: test mempool handler David Hunt
2016-06-17 14:37                             ` Jan Viktorin
2016-06-17 13:53                           ` [PATCH v14 3/3] mbuf: make default mempool ops configurable at build David Hunt
2016-06-17 14:41                             ` Jan Viktorin
2016-06-19 12:05                           ` [PATCH v15 0/3] mempool: add mempool handler feature David Hunt
2016-06-19 12:05                             ` [PATCH v15 1/3] mempool: support mempool handler operations David Hunt
2016-06-19 12:05                             ` [PATCH v15 2/3] app/test: test mempool handler David Hunt
2016-06-19 12:05                             ` [PATCH v15 3/3] mbuf: make default mempool ops configurable at build David Hunt
2016-06-22  7:56                             ` [PATCH v15 0/3] mempool: add mempool handler feature Thomas Monjalon
2016-06-22  8:02                               ` Thomas Monjalon
2016-06-22  9:27                             ` [PATCH v16 " David Hunt
2016-06-22  9:27                               ` [PATCH v16 1/3] mempool: support mempool handler operations David Hunt
2016-06-22  9:27                               ` [PATCH v16 2/3] app/test: test mempool handler David Hunt
2016-06-22  9:27                               ` [PATCH v16 3/3] mbuf: make default mempool ops configurable at build David Hunt
2016-06-23 21:22                               ` [PATCH v16 0/3] mempool: add mempool handler feature Thomas Monjalon
2016-06-24  4:55                                 ` Wiles, Keith
2016-06-24 11:20                                   ` Jan Viktorin
2016-06-24 11:24                                     ` Thomas Monjalon
2016-06-24 13:10                                       ` Jan Viktorin