* [PATCH 0/4] Infrastructure to support octeontx HW mempool manager
@ 2017-06-21 17:32 Santosh Shukla
  2017-06-21 17:32 ` [PATCH 1/4] mempool: get the external mempool capability Santosh Shukla
                   ` (4 more replies)
  0 siblings, 5 replies; 116+ messages in thread
From: Santosh Shukla @ 2017-06-21 17:32 UTC (permalink / raw)
  To: olivier.matz, dev
  Cc: thomas, hemant.agrawal, jerin.jacob, bruce.richardson, Santosh Shukla

In order to support the octeontx HW mempool manager, the common mempool layer
must meet the conditions below:
- Buffer start addresses must be block-size aligned.
- Buffers must occupy physically contiguous addresses within the pool.

Right now, the mempool library supports neither.

This patchset adds infrastructure to support both conditions in a _generic_ way.
The proposed solution does not affect existing mempool drivers or their
functionality.

Summary:
A capability flag is introduced, so that mempool drivers can advertise their
capabilities to the common mempool layer at pool creation time.
Handlers are introduced to support the capability flags.

Flags:
* MEMPOOL_F_POOL_CONTIG - If this flag is set, detect whether the buffers
have physically contiguous addresses within a hugepage.

* MEMPOOL_F_POOL_BLK_SZ_ALIGNED - If this flag is set, make sure that buffer
addresses are block-size aligned.

API:
Two handlers are introduced:
* rte_mempool_ops_get_hw_cap - advertise the HW mempool manager capabilities.
* rte_mempool_ops_update_range - update the pa/va start and end address range
in the HW mempool manager.

Testing:
* Tested on x86_64 with the rte_ring and stack handlers.
* Tested with the octeontx HW mempool block.

Checkpatch status:
* Noticed false-positive checkpatch warnings:
WARNING: line over 80 characters
#30: FILE: lib/librte_mempool/rte_mempool.c:374:
+                       RTE_LOG(ERR, MEMPOOL, "nb_mbufs not fitting in one hugepage,..exit\n");

WARNING: line over 80 characters
#46: FILE: lib/librte_mempool/rte_mempool.h:269:
+#define MEMPOOL_F_POOL_CONTIG    0x0040 /**< Detect physcially contiguous objs */

total: 0 errors, 2 warnings, 21 lines checked

Thanks.

Santosh Shukla (4):
  mempool: get the external mempool capability
  mempool: detect physical contiguous object in pool
  mempool: introduce block size align flag
  mempool: update range info to pool

 lib/librte_mempool/rte_mempool.c           | 34 ++++++++++++++++++++--
 lib/librte_mempool/rte_mempool.h           | 46 ++++++++++++++++++++++++++++++
 lib/librte_mempool/rte_mempool_ops.c       | 27 ++++++++++++++++++
 lib/librte_mempool/rte_mempool_version.map |  8 ++++++
 4 files changed, 112 insertions(+), 3 deletions(-)

-- 
2.13.0

^ permalink raw reply	[flat|nested] 116+ messages in thread

* [PATCH 1/4] mempool: get the external mempool capability
  2017-06-21 17:32 [PATCH 0/4] Infrastructure to support octeontx HW mempool manager Santosh Shukla
@ 2017-06-21 17:32 ` Santosh Shukla
  2017-07-03 16:37   ` Olivier Matz
  2017-06-21 17:32 ` [PATCH 2/4] mempool: detect physical contiguous object in pool Santosh Shukla
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 116+ messages in thread
From: Santosh Shukla @ 2017-06-21 17:32 UTC (permalink / raw)
  To: olivier.matz, dev
  Cc: thomas, hemant.agrawal, jerin.jacob, bruce.richardson, Santosh Shukla

Allow an external mempool to advertise its capabilities.
A handler called rte_mempool_ops_get_hw_cap is introduced.
- On a ->get_hw_cap call, the mempool driver advertises its
capabilities by returning a flag.
- The common layer ORs the flag value into 'mp->flags'.

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
 lib/librte_mempool/rte_mempool.c           |  5 +++++
 lib/librte_mempool/rte_mempool.h           | 20 ++++++++++++++++++++
 lib/librte_mempool/rte_mempool_ops.c       | 14 ++++++++++++++
 lib/librte_mempool/rte_mempool_version.map |  7 +++++++
 4 files changed, 46 insertions(+)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index f65310f60..045baef45 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -527,6 +527,11 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 	if (mp->nb_mem_chunks != 0)
 		return -EEXIST;
 
+	/* Get external mempool capability */
+	ret = rte_mempool_ops_get_hw_cap(mp);
+	if (ret != -ENOENT)
+		mp->flags |= ret;
+
 	if (rte_xen_dom0_supported()) {
 		pg_sz = RTE_PGSIZE_2M;
 		pg_shift = rte_bsf32(pg_sz);
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index a65f1a79d..c3cdc77e4 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -390,6 +390,12 @@ typedef int (*rte_mempool_dequeue_t)(struct rte_mempool *mp,
  */
 typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
 
+/**
+ * Get the mempool hw capability.
+ */
+typedef int (*rte_mempool_get_hw_cap_t)(struct rte_mempool *mp);
+
+
 /** Structure defining mempool operations structure */
 struct rte_mempool_ops {
 	char name[RTE_MEMPOOL_OPS_NAMESIZE]; /**< Name of mempool ops struct. */
@@ -398,6 +404,7 @@ struct rte_mempool_ops {
 	rte_mempool_enqueue_t enqueue;   /**< Enqueue an object. */
 	rte_mempool_dequeue_t dequeue;   /**< Dequeue an object. */
 	rte_mempool_get_count get_count; /**< Get qty of available objs. */
+	rte_mempool_get_hw_cap_t get_hw_cap; /**< Get hw capability */
 } __rte_cache_aligned;
 
 #define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */
@@ -509,6 +516,19 @@ rte_mempool_ops_enqueue_bulk(struct rte_mempool *mp, void * const *obj_table,
 unsigned
 rte_mempool_ops_get_count(const struct rte_mempool *mp);
 
+
+/**
+ * @internal wrapper for mempool_ops get_hw_cap callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @return
+ *   - On success; Valid capability flag.
+ *   - On failure; -ENOENT error code i.e. implementation not supported.
+ */
+int
+rte_mempool_ops_get_hw_cap(struct rte_mempool *mp);
+
 /**
  * @internal wrapper for mempool_ops free callback.
  *
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
index 5f24de250..3a09f5d32 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -85,6 +85,7 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h)
 	ops->enqueue = h->enqueue;
 	ops->dequeue = h->dequeue;
 	ops->get_count = h->get_count;
+	ops->get_hw_cap = h->get_hw_cap;
 
 	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
 
@@ -123,6 +124,19 @@ rte_mempool_ops_get_count(const struct rte_mempool *mp)
 	return ops->get_count(mp);
 }
 
+/* wrapper to get external mempool capability. */
+int
+rte_mempool_ops_get_hw_cap(struct rte_mempool *mp)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_get_ops(mp->ops_index);
+	if (ops->get_hw_cap)
+		return ops->get_hw_cap(mp);
+
+	return -ENOENT;
+}
+
 /* sets mempool ops previously registered by rte_mempool_register_ops. */
 int
 rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index f9c079447..d92334672 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -41,3 +41,10 @@ DPDK_16.07 {
 	rte_mempool_set_ops_byname;
 
 } DPDK_2.0;
+
+DPDK_17.08 {
+	global:
+
+	rte_mempool_ops_get_hw_cap;
+
+} DPDK_17.05;
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 116+ messages in thread

* [PATCH 2/4] mempool: detect physical contiguous object in pool
  2017-06-21 17:32 [PATCH 0/4] Infrastructure to support octeontx HW mempool manager Santosh Shukla
  2017-06-21 17:32 ` [PATCH 1/4] mempool: get the external mempool capability Santosh Shukla
@ 2017-06-21 17:32 ` Santosh Shukla
  2017-07-03 16:37   ` Olivier Matz
  2017-06-21 17:32 ` [PATCH 3/4] mempool: introduce block size align flag Santosh Shukla
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 116+ messages in thread
From: Santosh Shukla @ 2017-06-21 17:32 UTC (permalink / raw)
  To: olivier.matz, dev
  Cc: thomas, hemant.agrawal, jerin.jacob, bruce.richardson, Santosh Shukla

HW mempool blocks may need physically contiguous objects in a pool.
Introduce the MEMPOOL_F_POOL_CONTIG flag for such use cases. The flag
is used to detect whether all buffers fit within a single hugepage. If
they do not, return -ENOSPC. This way, we make sure that all objects
within a pool are contiguous.

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
 lib/librte_mempool/rte_mempool.c | 8 ++++++++
 lib/librte_mempool/rte_mempool.h | 1 +
 2 files changed, 9 insertions(+)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 045baef45..7dec2f51d 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -368,6 +368,14 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
 
 	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
 
+	/* Detect nb_mbuf fit in hugepage */
+	if (mp->flags & MEMPOOL_F_POOL_CONTIG) {
+		if (len < total_elt_sz * mp->size) {
+			RTE_LOG(ERR, MEMPOOL, "nb_mbufs not fitting in one hugepage,..exit\n");
+			return -ENOSPC;
+		}
+	}
+
 	memhdr = rte_zmalloc("MEMPOOL_MEMHDR", sizeof(*memhdr), 0);
 	if (memhdr == NULL)
 		return -ENOMEM;
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index c3cdc77e4..fd8722e69 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -266,6 +266,7 @@ struct rte_mempool {
 #define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
 #define MEMPOOL_F_POOL_CREATED   0x0010 /**< Internal: pool is created. */
 #define MEMPOOL_F_NO_PHYS_CONTIG 0x0020 /**< Don't need physically contiguous objs. */
+#define MEMPOOL_F_POOL_CONTIG    0x0040 /**< Detect physcially contiguous objs */
 
 /**
  * @internal When debug is enabled, store some statistics.
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 116+ messages in thread

* [PATCH 3/4] mempool: introduce block size align flag
  2017-06-21 17:32 [PATCH 0/4] Infrastructure to support octeontx HW mempool manager Santosh Shukla
  2017-06-21 17:32 ` [PATCH 1/4] mempool: get the external mempool capability Santosh Shukla
  2017-06-21 17:32 ` [PATCH 2/4] mempool: detect physical contiguous object in pool Santosh Shukla
@ 2017-06-21 17:32 ` Santosh Shukla
  2017-07-03 16:37   ` Olivier Matz
  2017-06-21 17:32 ` [PATCH 4/4] mempool: update range info to pool Santosh Shukla
  2017-07-13  9:32 ` [PATCH v2 0/6] Infrastructure to support octeontx HW mempool manager Santosh Shukla
  4 siblings, 1 reply; 116+ messages in thread
From: Santosh Shukla @ 2017-06-21 17:32 UTC (permalink / raw)
  To: olivier.matz, dev
  Cc: thomas, hemant.agrawal, jerin.jacob, bruce.richardson, Santosh Shukla

Some mempool HW, like the octeontx/fpa block, demands block-size-aligned
buffer addresses.

Introduce a MEMPOOL_F_POOL_BLK_SZ_ALIGNED flag.
If this flag is set:
1) Adjust the 'off' value to a block-size-aligned value.
2) Allocate one additional buffer. This buffer is used to make sure that
the requested 'n' buffers get correctly populated in the mempool.
Example:
	elem_sz = 2432 // total element size.
	n = 2111 // requested number of buffers.
	off = 2304 // new buf_offset value after step 1)
	vaddr = 0x0 // actual start address of pool
	pool_len = 5133952 // total pool length i.e. (elem_sz * n)

Since 'off' is non-zero, the condition below would fail in the
block-size-aligned case:

(((vaddr + off) + (elem_sz * n)) <= (vaddr + pool_len))

This is incorrect behavior. The additional buffer solves this
problem and lets 'n' buffers be correctly populated in the mempool in
aligned mode.

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
 lib/librte_mempool/rte_mempool.c | 19 ++++++++++++++++---
 lib/librte_mempool/rte_mempool.h |  1 +
 2 files changed, 17 insertions(+), 3 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 7dec2f51d..2010857f0 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -350,7 +350,7 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
 {
 	unsigned total_elt_sz;
 	unsigned i = 0;
-	size_t off;
+	size_t off, delta;
 	struct rte_mempool_memhdr *memhdr;
 	int ret;
 
@@ -387,7 +387,15 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
 	memhdr->free_cb = free_cb;
 	memhdr->opaque = opaque;
 
-	if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
+	if (mp->flags & MEMPOOL_F_POOL_BLK_SZ_ALIGNED) {
+		delta = (uintptr_t)vaddr % total_elt_sz;
+		off = total_elt_sz - delta;
+		/* Validate alignment */
+		if (((uintptr_t)vaddr + off) % total_elt_sz) {
+			RTE_LOG(ERR, MEMPOOL, "vaddr(%p) not aligned to total_elt_sz(%u)\n", (vaddr + off), total_elt_sz);
+			return -EINVAL;
+		}
+	} else if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
 		off = RTE_PTR_ALIGN_CEIL(vaddr, 8) - vaddr;
 	else
 		off = RTE_PTR_ALIGN_CEIL(vaddr, RTE_CACHE_LINE_SIZE) - vaddr;
@@ -555,8 +563,13 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 	}
 
 	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
+
 	for (mz_id = 0, n = mp->size; n > 0; mz_id++, n -= ret) {
-		size = rte_mempool_xmem_size(n, total_elt_sz, pg_shift);
+		if (mp->flags & MEMPOOL_F_POOL_BLK_SZ_ALIGNED)
+			size = rte_mempool_xmem_size(n + 1, total_elt_sz,
+							pg_shift);
+		else
+			size = rte_mempool_xmem_size(n, total_elt_sz, pg_shift);
 
 		ret = snprintf(mz_name, sizeof(mz_name),
 			RTE_MEMPOOL_MZ_FORMAT "_%d", mp->name, mz_id);
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index fd8722e69..99a20263d 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -267,6 +267,7 @@ struct rte_mempool {
 #define MEMPOOL_F_POOL_CREATED   0x0010 /**< Internal: pool is created. */
 #define MEMPOOL_F_NO_PHYS_CONTIG 0x0020 /**< Don't need physically contiguous objs. */
 #define MEMPOOL_F_POOL_CONTIG    0x0040 /**< Detect physcially contiguous objs */
+#define MEMPOOL_F_POOL_BLK_SZ_ALIGNED 0x0080 /**< Align buffer address to block size*/
 
 /**
  * @internal When debug is enabled, store some statistics.
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 116+ messages in thread

* [PATCH 4/4] mempool: update range info to pool
  2017-06-21 17:32 [PATCH 0/4] Infrastructure to support octeontx HW mempool manager Santosh Shukla
                   ` (2 preceding siblings ...)
  2017-06-21 17:32 ` [PATCH 3/4] mempool: introduce block size align flag Santosh Shukla
@ 2017-06-21 17:32 ` Santosh Shukla
  2017-07-13  9:32 ` [PATCH v2 0/6] Infrastructure to support octeontx HW mempool manager Santosh Shukla
  4 siblings, 0 replies; 116+ messages in thread
From: Santosh Shukla @ 2017-06-21 17:32 UTC (permalink / raw)
  To: olivier.matz, dev
  Cc: thomas, hemant.agrawal, jerin.jacob, bruce.richardson, Santosh Shukla

A HW pool manager, e.g. the Octeontx SoC, demands that software program the
start and end addresses of the pool. Currently, there is no such handler in
the external mempool API. Introduce the rte_mempool_ops_update_range handler,
which lets the HW pool manager know which hugepage the common layer selects:
for each hugepage, its start/end addresses are passed to the HW pool manager.

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
 lib/librte_mempool/rte_mempool.c           |  2 ++
 lib/librte_mempool/rte_mempool.h           | 24 ++++++++++++++++++++++++
 lib/librte_mempool/rte_mempool_ops.c       | 13 +++++++++++++
 lib/librte_mempool/rte_mempool_version.map |  1 +
 4 files changed, 40 insertions(+)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 2010857f0..f8249f6b2 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -354,6 +354,8 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
 	struct rte_mempool_memhdr *memhdr;
 	int ret;
 
+	rte_mempool_ops_update_range(mp, vaddr, paddr, len);
+
 	/* create the internal ring if not already done */
 	if ((mp->flags & MEMPOOL_F_POOL_CREATED) == 0) {
 		ret = rte_mempool_ops_alloc(mp);
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 99a20263d..ad5bf6d3e 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -398,6 +398,12 @@ typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
 typedef int (*rte_mempool_get_hw_cap_t)(struct rte_mempool *mp);
 
 
+/**
+ * Update range info to mempool.
+ */
+typedef void (*rte_mempool_update_range_t)(struct rte_mempool *mp,
+		char *vaddr, phys_addr_t paddr, size_t len);
+
 /** Structure defining mempool operations structure */
 struct rte_mempool_ops {
 	char name[RTE_MEMPOOL_OPS_NAMESIZE]; /**< Name of mempool ops struct. */
@@ -407,6 +413,7 @@ struct rte_mempool_ops {
 	rte_mempool_dequeue_t dequeue;   /**< Dequeue an object. */
 	rte_mempool_get_count get_count; /**< Get qty of available objs. */
 	rte_mempool_get_hw_cap_t get_hw_cap; /**< Get hw capability */
+	rte_mempool_update_range_t update_range; /**< Update range to mempool */
 } __rte_cache_aligned;
 
 #define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */
@@ -531,6 +538,23 @@ rte_mempool_ops_get_count(const struct rte_mempool *mp);
 int
 rte_mempool_ops_get_hw_cap(struct rte_mempool *mp);
 
+
+/**
+ * @internal wrapper for mempool_ops update_range callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param vaddr
+ *   Pointer to the buffer virtual address
+ * @param paddr
+ *   Pointer to the buffer physical address
+ * @param len
+ *   Pool size
+ */
+void
+rte_mempool_ops_update_range(struct rte_mempool *mp,
+				char *vaddr, phys_addr_t paddr, size_t len);
+
 /**
  * @internal wrapper for mempool_ops free callback.
  *
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
index 3a09f5d32..a61707a2b 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -86,6 +86,7 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h)
 	ops->dequeue = h->dequeue;
 	ops->get_count = h->get_count;
 	ops->get_hw_cap = h->get_hw_cap;
+	ops->update_range = h->update_range;
 
 	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
 
@@ -137,6 +138,18 @@ rte_mempool_ops_get_hw_cap(struct rte_mempool *mp)
 	return -ENOENT;
 }
 
+/* wrapper to update range info to external mempool */
+void
+rte_mempool_ops_update_range(struct rte_mempool *mp, char *vaddr,
+				phys_addr_t paddr, size_t len)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_get_ops(mp->ops_index);
+	if (ops->update_range)
+		ops->update_range(mp, vaddr, paddr, len);
+}
+
 /* sets mempool ops previously registered by rte_mempool_register_ops. */
 int
 rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index d92334672..fb9ac5c63 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -46,5 +46,6 @@ DPDK_17.08 {
 	global:
 
 	rte_mempool_ops_get_hw_cap;
+	rte_mempool_ops_update_range;
 
 } DPDK_17.05;
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 116+ messages in thread

* Re: [PATCH 1/4] mempool: get the external mempool capability
  2017-06-21 17:32 ` [PATCH 1/4] mempool: get the external mempool capability Santosh Shukla
@ 2017-07-03 16:37   ` Olivier Matz
  2017-07-05  6:41     ` santosh
  0 siblings, 1 reply; 116+ messages in thread
From: Olivier Matz @ 2017-07-03 16:37 UTC (permalink / raw)
  To: Santosh Shukla; +Cc: dev, thomas, hemant.agrawal, jerin.jacob, bruce.richardson

Hi Santosh,

On Wed, 21 Jun 2017 17:32:45 +0000, Santosh Shukla <santosh.shukla@caviumnetworks.com> wrote:
> Allow external mempool to advertise its capability.
> A handler been introduced called rte_mempool_ops_get_hw_cap.
> - Upon ->get_hw_cap call, mempool driver will advertise
> capability by returning flag.
> - Common layer updates flag value in 'mp->flags'.
> 
> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>

I guess you've already seen the compilation issue when shared libs
are enabled:
http://dpdk.org/dev/patchwork/patch/25603



> ---
>  lib/librte_mempool/rte_mempool.c           |  5 +++++
>  lib/librte_mempool/rte_mempool.h           | 20 ++++++++++++++++++++
>  lib/librte_mempool/rte_mempool_ops.c       | 14 ++++++++++++++
>  lib/librte_mempool/rte_mempool_version.map |  7 +++++++
>  4 files changed, 46 insertions(+)
> 
> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> index f65310f60..045baef45 100644
> --- a/lib/librte_mempool/rte_mempool.c
> +++ b/lib/librte_mempool/rte_mempool.c
> @@ -527,6 +527,11 @@ rte_mempool_populate_default(struct rte_mempool *mp)
>  	if (mp->nb_mem_chunks != 0)
>  		return -EEXIST;
>  
> +	/* Get external mempool capability */
> +	ret = rte_mempool_ops_get_hw_cap(mp);

"hw" can be removed since some handlers are software (the other occurences
of hw should be removed too)

"capabilities" is clearer than "cap"

So I suggest rte_mempool_ops_get_capabilities() instead
With this name, the comment above becomes overkill...

> +	if (ret != -ENOENT)

-ENOTSUP looks more appropriate (like in ethdev)

> +		mp->flags |= ret;

I'm wondering if these capability flags should be mixed with
other mempool flags.

We can maybe remove this code above and directly call
rte_mempool_ops_get_capabilities() when we need to get them.



> +
>  	if (rte_xen_dom0_supported()) {
>  		pg_sz = RTE_PGSIZE_2M;
>  		pg_shift = rte_bsf32(pg_sz);
> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> index a65f1a79d..c3cdc77e4 100644
> --- a/lib/librte_mempool/rte_mempool.h
> +++ b/lib/librte_mempool/rte_mempool.h
> @@ -390,6 +390,12 @@ typedef int (*rte_mempool_dequeue_t)(struct rte_mempool *mp,
>   */
>  typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
>  
> +/**
> + * Get the mempool hw capability.
> + */
> +typedef int (*rte_mempool_get_hw_cap_t)(struct rte_mempool *mp);
> +
> +

If possible, use "const struct rte_mempool *mp"

Since flags are unsigned, I would also prefer a function returning an
int (0 on success, negative on error) and writing to an unsigned pointer
provided by the user.



>  /** Structure defining mempool operations structure */
>  struct rte_mempool_ops {
>  	char name[RTE_MEMPOOL_OPS_NAMESIZE]; /**< Name of mempool ops struct. */
> @@ -398,6 +404,7 @@ struct rte_mempool_ops {
>  	rte_mempool_enqueue_t enqueue;   /**< Enqueue an object. */
>  	rte_mempool_dequeue_t dequeue;   /**< Dequeue an object. */
>  	rte_mempool_get_count get_count; /**< Get qty of available objs. */
> +	rte_mempool_get_hw_cap_t get_hw_cap; /**< Get hw capability */
>  } __rte_cache_aligned;
>  
>  #define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */
> @@ -509,6 +516,19 @@ rte_mempool_ops_enqueue_bulk(struct rte_mempool *mp, void * const *obj_table,
>  unsigned
>  rte_mempool_ops_get_count(const struct rte_mempool *mp);
>  
> +
> +/**
> + * @internal wrapper for mempool_ops get_hw_cap callback.
> + *
> + * @param mp
> + *   Pointer to the memory pool.
> + * @return
> + *   - On success; Valid capability flag.
> + *   - On failure; -ENOENT error code i.e. implementation not supported.

The possible values for the capability flags should be better described.


> + */
> +int
> +rte_mempool_ops_get_hw_cap(struct rte_mempool *mp);
> +
>  /**
>   * @internal wrapper for mempool_ops free callback.
>   *
> diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
> index 5f24de250..3a09f5d32 100644
> --- a/lib/librte_mempool/rte_mempool_ops.c
> +++ b/lib/librte_mempool/rte_mempool_ops.c
> @@ -85,6 +85,7 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h)
>  	ops->enqueue = h->enqueue;
>  	ops->dequeue = h->dequeue;
>  	ops->get_count = h->get_count;
> +	ops->get_hw_cap = h->get_hw_cap;
>  
>  	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
>  
> @@ -123,6 +124,19 @@ rte_mempool_ops_get_count(const struct rte_mempool *mp)
>  	return ops->get_count(mp);
>  }
>  
> +/* wrapper to get external mempool capability. */
> +int
> +rte_mempool_ops_get_hw_cap(struct rte_mempool *mp)
> +{
> +	struct rte_mempool_ops *ops;
> +
> +	ops = rte_mempool_get_ops(mp->ops_index);
> +	if (ops->get_hw_cap)
> +		return ops->get_hw_cap(mp);
> +
> +	return -ENOENT;
> +}
> +

RTE_FUNC_PTR_OR_ERR_RET() can be used


>  /* sets mempool ops previously registered by rte_mempool_register_ops. */
>  int
>  rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
> diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
> index f9c079447..d92334672 100644
> --- a/lib/librte_mempool/rte_mempool_version.map
> +++ b/lib/librte_mempool/rte_mempool_version.map
> @@ -41,3 +41,10 @@ DPDK_16.07 {
>  	rte_mempool_set_ops_byname;
>  
>  } DPDK_2.0;
> +
> +DPDK_17.08 {
> +	global:
> +
> +	rte_mempool_ops_get_hw_cap;
> +
> +} DPDK_17.05;


/usr/bin/ld: unable to find version dependency `DPDK_17.05'
This should be 16.07 here

^ permalink raw reply	[flat|nested] 116+ messages in thread

* Re: [PATCH 2/4] mempool: detect physical contiguous object in pool
  2017-06-21 17:32 ` [PATCH 2/4] mempool: detect physical contiguous object in pool Santosh Shukla
@ 2017-07-03 16:37   ` Olivier Matz
  2017-07-05  7:07     ` santosh
  0 siblings, 1 reply; 116+ messages in thread
From: Olivier Matz @ 2017-07-03 16:37 UTC (permalink / raw)
  To: Santosh Shukla; +Cc: dev, thomas, hemant.agrawal, jerin.jacob, bruce.richardson

On Wed, 21 Jun 2017 17:32:46 +0000, Santosh Shukla <santosh.shukla@caviumnetworks.com> wrote:
> HW mempool blocks may need physical contiguous obj in a pool.

This should be clarified: the memory area containing all the
objects must be physically contiguous, right?

> Introducing MEMPOOL_F_POOL_CONTIG flag for such use-case.  The flag
> useful to detect whether all buffer fits within a hugepage or not. If
> not then return -ENOSPC. This way, we make sure that all object within a
> pool is contiguous.
> 
> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> ---
>  lib/librte_mempool/rte_mempool.c | 8 ++++++++
>  lib/librte_mempool/rte_mempool.h | 1 +
>  2 files changed, 9 insertions(+)
> 
> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> index 045baef45..7dec2f51d 100644
> --- a/lib/librte_mempool/rte_mempool.c
> +++ b/lib/librte_mempool/rte_mempool.c
> @@ -368,6 +368,14 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
>  
>  	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
>  
> +	/* Detect nb_mbuf fit in hugepage */
> +	if (mp->flags & MEMPOOL_F_POOL_CONTIG) {
> +		if (len < total_elt_sz * mp->size) {
> +			RTE_LOG(ERR, MEMPOOL, "nb_mbufs not fitting in one hugepage,..exit\n");
> +			return -ENOSPC;
> +		}
> +	}
> +

We should not reference mbuf, we are in mempool code, dealing with
any kind of object.

Also, len is not necessarily the size of a hugepage, but the size of the
physical area passed to rte_mempool_populate_phys().

>  	memhdr = rte_zmalloc("MEMPOOL_MEMHDR", sizeof(*memhdr), 0);
>  	if (memhdr == NULL)
>  		return -ENOMEM;
> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> index c3cdc77e4..fd8722e69 100644
> --- a/lib/librte_mempool/rte_mempool.h
> +++ b/lib/librte_mempool/rte_mempool.h
> @@ -266,6 +266,7 @@ struct rte_mempool {
>  #define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
>  #define MEMPOOL_F_POOL_CREATED   0x0010 /**< Internal: pool is created. */
>  #define MEMPOOL_F_NO_PHYS_CONTIG 0x0020 /**< Don't need physically contiguous objs. */
> +#define MEMPOOL_F_POOL_CONTIG    0x0040 /**< Detect physcially contiguous objs */

We must highlight here that it's a capability flag.
Following my other comments on the first patch, this define should be
renamed in something else. I suggest:

#define RTE_MEMPOOL_CAPA_PHYS_CONTIG 0x0001

The description should be longer and more accurate.

I'm also a bit puzzled because this is more a limitation than a
capability.

>  
>  /**
>   * @internal When debug is enabled, store some statistics.

^ permalink raw reply	[flat|nested] 116+ messages in thread

* Re: [PATCH 3/4] mempool: introduce block size align flag
  2017-06-21 17:32 ` [PATCH 3/4] mempool: introduce block size align flag Santosh Shukla
@ 2017-07-03 16:37   ` Olivier Matz
  2017-07-05  7:35     ` santosh
  0 siblings, 1 reply; 116+ messages in thread
From: Olivier Matz @ 2017-07-03 16:37 UTC (permalink / raw)
  To: Santosh Shukla; +Cc: dev, thomas, hemant.agrawal, jerin.jacob, bruce.richardson

On Wed, 21 Jun 2017 17:32:47 +0000, Santosh Shukla <santosh.shukla@caviumnetworks.com> wrote:
> Some mempool hw like octeontx/fpa block, demands block size aligned
> buffer address.
> 

What is the meaning of block size aligned?

Does it mean that the address has to be a multiple of total_elt_size?

Is this constraint on the virtual address only?


> Introducing an MEMPOOL_F_POOL_BLK_SZ_ALIGNED flag.
> If this flag is set:
> 1) adjust 'off' value to block size aligned value.
> 2) Allocate one additional buffer. This buffer is used to make sure that
> requested 'n' buffers get correctly populated to mempool.
> Example:
> 	elem_sz = 2432 // total element size.
> 	n = 2111 // requested number of buffer.
> 	off = 2304 // new buf_offset value after step 1)
> 	vaddr = 0x0 // actual start address of pool
> 	pool_len = 5133952 // total pool length i.e.. (elem_sz * n)
> 
> Since 'off' is a non-zero value so below condition would fail for the
> block size align case.
> 
> (((vaddr + off) + (elem_sz * n)) <= (vaddr + pool_len))
> 
> Which is incorrect behavior. Additional buffer will solve this
> problem and correctly populate 'n' buffer to mempool for the aligned
> mode.

Sorry, but the example is not very clear.


> 
> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> ---
>  lib/librte_mempool/rte_mempool.c | 19 ++++++++++++++++---
>  lib/librte_mempool/rte_mempool.h |  1 +
>  2 files changed, 17 insertions(+), 3 deletions(-)
> 
> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> index 7dec2f51d..2010857f0 100644
> --- a/lib/librte_mempool/rte_mempool.c
> +++ b/lib/librte_mempool/rte_mempool.c
> @@ -350,7 +350,7 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
>  {
>  	unsigned total_elt_sz;
>  	unsigned i = 0;
> -	size_t off;
> +	size_t off, delta;
>  	struct rte_mempool_memhdr *memhdr;
>  	int ret;
>  
> @@ -387,7 +387,15 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
>  	memhdr->free_cb = free_cb;
>  	memhdr->opaque = opaque;
>  
> -	if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
> +	if (mp->flags & MEMPOOL_F_POOL_BLK_SZ_ALIGNED) {
> +		delta = (uintptr_t)vaddr % total_elt_sz;
> +		off = total_elt_sz - delta;
> +		/* Validate alignment */
> +		if (((uintptr_t)vaddr + off) % total_elt_sz) {
> +			RTE_LOG(ERR, MEMPOOL, "vaddr(%p) not aligned to total_elt_sz(%u)\n", (vaddr + off), total_elt_sz);
> +			return -EINVAL;
> +		}
> +	} else if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
>  		off = RTE_PTR_ALIGN_CEIL(vaddr, 8) - vaddr;
>  	else
>  		off = RTE_PTR_ALIGN_CEIL(vaddr, RTE_CACHE_LINE_SIZE) - vaddr;

What is the purpose of this test? Can it fail?

Not sure having the delta variable is helpful. However, adding a
small comment like this could help:

	/* align object start address to a multiple of total_elt_sz */
	off = total_elt_sz - ((uintptr_t)vaddr % total_elt_sz);
	
About style, please don't mix brackets and no-bracket blocks in the
same if/elseif/else.

> @@ -555,8 +563,13 @@ rte_mempool_populate_default(struct rte_mempool *mp)
>  	}
>  
>  	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
> +
>  	for (mz_id = 0, n = mp->size; n > 0; mz_id++, n -= ret) {
> -		size = rte_mempool_xmem_size(n, total_elt_sz, pg_shift);
> +		if (mp->flags & MEMPOOL_F_POOL_BLK_SZ_ALIGNED)
> +			size = rte_mempool_xmem_size(n + 1, total_elt_sz,
> +							pg_shift);
> +		else
> +			size = rte_mempool_xmem_size(n, total_elt_sz, pg_shift);
>  
>  		ret = snprintf(mz_name, sizeof(mz_name),
>  			RTE_MEMPOOL_MZ_FORMAT "_%d", mp->name, mz_id);


One issue I see here is that this new flag breaks the function
rte_mempool_xmem_size(), which calculates the maximum amount of memory
required to store a given number of objects.

It also probably breaks rte_mempool_xmem_usage().

I don't have any good solution for now. A possibility is to change
the behavior of these functions for everyone, meaning that we will
always reserve more memory than really required. If this is done on
every memory chunk (struct rte_mempool_memhdr), it can eat a lot
of memory.

Another approach would be to change the API of this function to
pass the capability flags, or the mempool pointer... but there is
a problem because these functions are usually called before the
mempool is instantiated.


> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> index fd8722e69..99a20263d 100644
> --- a/lib/librte_mempool/rte_mempool.h
> +++ b/lib/librte_mempool/rte_mempool.h
> @@ -267,6 +267,7 @@ struct rte_mempool {
>  #define MEMPOOL_F_POOL_CREATED   0x0010 /**< Internal: pool is created. */
>  #define MEMPOOL_F_NO_PHYS_CONTIG 0x0020 /**< Don't need physically contiguous objs. */
>  #define MEMPOOL_F_POOL_CONTIG    0x0040 /**< Detect physcially contiguous objs */
> +#define MEMPOOL_F_POOL_BLK_SZ_ALIGNED 0x0080 /**< Align buffer address to block size*/
>  
>  /**
>   * @internal When debug is enabled, store some statistics.

Same comment than for patch 3: the explanation should really be clarified.
It's a hw specific limitation, which won't be obvious for the people that
will read that code, so we must document it as clear as possible.

^ permalink raw reply	[flat|nested] 116+ messages in thread

* Re: [PATCH 1/4] mempool: get the external mempool capability
  2017-07-03 16:37   ` Olivier Matz
@ 2017-07-05  6:41     ` santosh
  2017-07-10 13:55       ` Olivier Matz
  0 siblings, 1 reply; 116+ messages in thread
From: santosh @ 2017-07-05  6:41 UTC (permalink / raw)
  To: Olivier Matz; +Cc: dev, thomas, hemant.agrawal, jerin.jacob, bruce.richardson

Hi Olivier,

On Monday 03 July 2017 10:07 PM, Olivier Matz wrote:

> Hi Santosh,
>
> On Wed, 21 Jun 2017 17:32:45 +0000, Santosh Shukla <santosh.shukla@caviumnetworks.com> wrote:
>> Allow external mempool to advertise its capability.
>> A handler been introduced called rte_mempool_ops_get_hw_cap.
>> - Upon ->get_hw_cap call, mempool driver will advertise
>> capability by returning flag.
>> - Common layer updates flag value in 'mp->flags'.
>>
>> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
>> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> I guess you've already seen the compilation issue when shared libs
> are enabled:
> http://dpdk.org/dev/patchwork/patch/25603
>
Yes, will fix in v2.

>
>> ---
>>  lib/librte_mempool/rte_mempool.c           |  5 +++++
>>  lib/librte_mempool/rte_mempool.h           | 20 ++++++++++++++++++++
>>  lib/librte_mempool/rte_mempool_ops.c       | 14 ++++++++++++++
>>  lib/librte_mempool/rte_mempool_version.map |  7 +++++++
>>  4 files changed, 46 insertions(+)
>>
>> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
>> index f65310f60..045baef45 100644
>> --- a/lib/librte_mempool/rte_mempool.c
>> +++ b/lib/librte_mempool/rte_mempool.c
>> @@ -527,6 +527,11 @@ rte_mempool_populate_default(struct rte_mempool *mp)
>>  	if (mp->nb_mem_chunks != 0)
>>  		return -EEXIST;
>>  
>> +	/* Get external mempool capability */
>> +	ret = rte_mempool_ops_get_hw_cap(mp);
> "hw" can be removed since some handlers are software (the other occurences
> of hw should be removed too)
>
> "capabilities" is clearer than "cap"
>
> So I suggest rte_mempool_ops_get_capabilities() instead
> With this name, the comment above becomes overkill...

ok. Will take care in v2.

>> +	if (ret != -ENOENT)
> -ENOTSUP looks more appropriate (like in ethdev)
>
imo: -ENOENT tells that the driver has no entry for the capability flag (mp->flags).
But I have no strong opinion against -ENOTSUP.

>> +		mp->flags |= ret;
> I'm wondering if these capability flags should be mixed with
> other mempool flags.
>
> We can maybe remove this code above and directly call
> rte_mempool_ops_get_capabilities() when we need to get them.

0) Treating this capability flag differently from the existing RTE_MEMPOOL_F flags would
mean adding a new flag field to struct rte_mempool { .drv_flag }, for example.
1) That new field would break ABI.
2) In fact, an application can benefit from this capability flag by setting it explicitly
in the pool create API (e.g. rte_mempool_create_empty(, , , , , _F_POOL_CONTIG | F_BLK_SZ_ALIGNED)).

These flags' use-cases are not limited to driver scope; applications can benefit too.

3) Also, given that we have space in the RTE_MEMPOOL_F_XX area, adding a couple
more bits won't impact the design or affect the pool creation sequence.

4) Calling _ops_get_capability() in the _populate_default() area would address the issues
you pointed out in patch [3/4]. I will explain the 'how' in detail on that patch.

5) Above all, the intent is to make sure that the common layer manages the capability flags
on behalf of the driver or the application.

>
>
>> +
>>  	if (rte_xen_dom0_supported()) {
>>  		pg_sz = RTE_PGSIZE_2M;
>>  		pg_shift = rte_bsf32(pg_sz);
>> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
>> index a65f1a79d..c3cdc77e4 100644
>> --- a/lib/librte_mempool/rte_mempool.h
>> +++ b/lib/librte_mempool/rte_mempool.h
>> @@ -390,6 +390,12 @@ typedef int (*rte_mempool_dequeue_t)(struct rte_mempool *mp,
>>   */
>>  typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
>>  
>> +/**
>> + * Get the mempool hw capability.
>> + */
>> +typedef int (*rte_mempool_get_hw_cap_t)(struct rte_mempool *mp);
>> +
>> +
> If possible, use "const struct rte_mempool *mp"
>
> Since flags are unsigned, I would also prefer a function returning an
> int (0 on success, negative on error) and writing to an unsigned pointer
> provided by the user.
>
Confused? mp->flags is int, not unsigned. And we're returning
-ENOENT/-ENOTSUP on error and a positive value in case the driver supports a capability.

>
>>  /** Structure defining mempool operations structure */
>>  struct rte_mempool_ops {
>>  	char name[RTE_MEMPOOL_OPS_NAMESIZE]; /**< Name of mempool ops struct. */
>> @@ -398,6 +404,7 @@ struct rte_mempool_ops {
>>  	rte_mempool_enqueue_t enqueue;   /**< Enqueue an object. */
>>  	rte_mempool_dequeue_t dequeue;   /**< Dequeue an object. */
>>  	rte_mempool_get_count get_count; /**< Get qty of available objs. */
>> +	rte_mempool_get_hw_cap_t get_hw_cap; /**< Get hw capability */
>>  } __rte_cache_aligned;
>>  
>>  #define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */
>> @@ -509,6 +516,19 @@ rte_mempool_ops_enqueue_bulk(struct rte_mempool *mp, void * const *obj_table,
>>  unsigned
>>  rte_mempool_ops_get_count(const struct rte_mempool *mp);
>>  
>> +
>> +/**
>> + * @internal wrapper for mempool_ops get_hw_cap callback.
>> + *
>> + * @param mp
>> + *   Pointer to the memory pool.
>> + * @return
>> + *   - On success; Valid capability flag.
>> + *   - On failure; -ENOENT error code i.e. implementation not supported.
> The possible values for the capability flags should be better described.
>
ok,

>> + */
>> +int
>> +rte_mempool_ops_get_hw_cap(struct rte_mempool *mp);
>> +
>>  /**
>>   * @internal wrapper for mempool_ops free callback.
>>   *
>> diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
>> index 5f24de250..3a09f5d32 100644
>> --- a/lib/librte_mempool/rte_mempool_ops.c
>> +++ b/lib/librte_mempool/rte_mempool_ops.c
>> @@ -85,6 +85,7 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h)
>>  	ops->enqueue = h->enqueue;
>>  	ops->dequeue = h->dequeue;
>>  	ops->get_count = h->get_count;
>> +	ops->get_hw_cap = h->get_hw_cap;
>>  
>>  	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
>>  
>> @@ -123,6 +124,19 @@ rte_mempool_ops_get_count(const struct rte_mempool *mp)
>>  	return ops->get_count(mp);
>>  }
>>  
>> +/* wrapper to get external mempool capability. */
>> +int
>> +rte_mempool_ops_get_hw_cap(struct rte_mempool *mp)
>> +{
>> +	struct rte_mempool_ops *ops;
>> +
>> +	ops = rte_mempool_get_ops(mp->ops_index);
>> +	if (ops->get_hw_cap)
>> +		return ops->get_hw_cap(mp);
>> +
>> +	return -ENOENT;
>> +}
>> +
> RTE_FUNC_PTR_OR_ERR_RET() can be used

in v2.

>
>>  /* sets mempool ops previously registered by rte_mempool_register_ops. */
>>  int
>>  rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
>> diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
>> index f9c079447..d92334672 100644
>> --- a/lib/librte_mempool/rte_mempool_version.map
>> +++ b/lib/librte_mempool/rte_mempool_version.map
>> @@ -41,3 +41,10 @@ DPDK_16.07 {
>>  	rte_mempool_set_ops_byname;
>>  
>>  } DPDK_2.0;
>> +
>> +DPDK_17.08 {
>> +	global:
>> +
>> +	rte_mempool_ops_get_hw_cap;
>> +
>> +} DPDK_17.05;
>
> /usr/bin/ld: unable to find version dependency `DPDK_17.05'
> This should be 16.07 here
>
Will fix that in v2.
Thanks.

>


* Re: [PATCH 2/4] mempool: detect physical contiguous object in pool
  2017-07-03 16:37   ` Olivier Matz
@ 2017-07-05  7:07     ` santosh
  0 siblings, 0 replies; 116+ messages in thread
From: santosh @ 2017-07-05  7:07 UTC (permalink / raw)
  To: Olivier Matz; +Cc: dev, thomas, hemant.agrawal, jerin.jacob, bruce.richardson

Hi Olivier,

On Monday 03 July 2017 10:07 PM, Olivier Matz wrote:

> On Wed, 21 Jun 2017 17:32:46 +0000, Santosh Shukla <santosh.shukla@caviumnetworks.com> wrote:
>> HW mempool blocks may need physical contiguous obj in a pool.
> This should be clarified: the memory area containing all the
> objects must be physically contiguous, right?

ok.

>> Introducing MEMPOOL_F_POOL_CONTIG flag for such use-case.  The flag
>> useful to detect whether all buffer fits within a hugepage or not. If
>> not then return -ENOSPC. This way, we make sure that all object within a
>> pool is contiguous.
>>
>> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
>> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
>> ---
>>  lib/librte_mempool/rte_mempool.c | 8 ++++++++
>>  lib/librte_mempool/rte_mempool.h | 1 +
>>  2 files changed, 9 insertions(+)
>>
>> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
>> index 045baef45..7dec2f51d 100644
>> --- a/lib/librte_mempool/rte_mempool.c
>> +++ b/lib/librte_mempool/rte_mempool.c
>> @@ -368,6 +368,14 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
>>  
>>  	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
>>  
>> +	/* Detect nb_mbuf fit in hugepage */
>> +	if (mp->flags & MEMPOOL_F_POOL_CONTIG) {
>> +		if (len < total_elt_sz * mp->size) {
>> +			RTE_LOG(ERR, MEMPOOL, "nb_mbufs not fitting in one hugepage,..exit\n");
>> +			return -ENOSPC;
>> +		}
>> +	}
>> +
> We should not reference mbuf, we are in mempool code, dealing with
> any kind of object.

ok, in v2.

> Also, len is not necessarily the size of a hugepage, but the size of the
> physical area passed to te_mempool_populate_phys().

The idea is to make sure that blk_sz (total_elt_sz * mp->size) fits within a
hugepage. So if rte_eal_has_hugepages() is true then 'len' is the
hugepage size; in the non-hugepage case this condition would fail.

Does that make sense? If so, I'll modify the comment and error log.
Otherwise, could you please suggest a better approach to detect physical contiguity.

>>  	memhdr = rte_zmalloc("MEMPOOL_MEMHDR", sizeof(*memhdr), 0);
>>  	if (memhdr == NULL)
>>  		return -ENOMEM;
>> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
>> index c3cdc77e4..fd8722e69 100644
>> --- a/lib/librte_mempool/rte_mempool.h
>> +++ b/lib/librte_mempool/rte_mempool.h
>> @@ -266,6 +266,7 @@ struct rte_mempool {
>>  #define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
>>  #define MEMPOOL_F_POOL_CREATED   0x0010 /**< Internal: pool is created. */
>>  #define MEMPOOL_F_NO_PHYS_CONTIG 0x0020 /**< Don't need physically contiguous objs. */
>> +#define MEMPOOL_F_POOL_CONTIG    0x0040 /**< Detect physcially contiguous objs */
> We must highlight here that it's a capability flag.
> Following my other comments on the first patch, this define should be
> renamed in something else. I suggest:
>
> #define RTE_MEMPOOL_CAPA_PHYS_CONTIG 0x0001
>
> The description should be longer and more accurate.
>
> I'm also a bit puzzled because this is more a limitation than a
> capability.

OK with renaming the flag, per my [1/4] comment.

But I find it makes more sense not to differentiate PHYS_CONTIG
from the existing mempool capability flags.

>>  
>>  /**
>>   * @internal When debug is enabled, store some statistics.


* Re: [PATCH 3/4] mempool: introduce block size align flag
  2017-07-03 16:37   ` Olivier Matz
@ 2017-07-05  7:35     ` santosh
  2017-07-10 13:15       ` Olivier Matz
  0 siblings, 1 reply; 116+ messages in thread
From: santosh @ 2017-07-05  7:35 UTC (permalink / raw)
  To: Olivier Matz; +Cc: dev, thomas, hemant.agrawal, jerin.jacob, bruce.richardson

Hi Olivier,

On Monday 03 July 2017 10:07 PM, Olivier Matz wrote:

> On Wed, 21 Jun 2017 17:32:47 +0000, Santosh Shukla <santosh.shukla@caviumnetworks.com> wrote:
>> Some mempool hw like octeontx/fpa block, demands block size aligned
>> buffer address.
>>
> What is the meaning of block size aligned?

Block size is total_elt_sz.

> Does it mean that the address has to be a multiple of total_elt_size?

yes.

> Is this constraint on the virtual address only?
>
both.

>> Introducing an MEMPOOL_F_POOL_BLK_SZ_ALIGNED flag.
>> If this flag is set:
>> 1) adjust 'off' value to block size aligned value.
>> 2) Allocate one additional buffer. This buffer is used to make sure that
>> requested 'n' buffers get correctly populated to mempool.
>> Example:
>> 	elem_sz = 2432 // total element size.
>> 	n = 2111 // requested number of buffer.
>> 	off = 2304 // new buf_offset value after step 1)
>> 	vaddr = 0x0 // actual start address of pool
>> 	pool_len = 5133952 // total pool length i.e.. (elem_sz * n)
>>
>> Since 'off' is a non-zero value so below condition would fail for the
>> block size align case.
>>
>> (((vaddr + off) + (elem_sz * n)) <= (vaddr + pool_len))
>>
>> Which is incorrect behavior. Additional buffer will solve this
>> problem and correctly populate 'n' buffer to mempool for the aligned
>> mode.
> Sorry, but the example is not very clear.
>
which part?

I'll try to reword.

The problem statement is:
- We want the start of the buffer address aligned to block_sz, aka total_elt_sz.

Proposed solution in this patch:
- Let's say we get a memory chunk of size 'x' from the memzone.
- Ideally we would use buffers from address 0 up to (x - block_sz).
- But the first buffer address, i.e. 0, is not necessarily aligned to block_sz.
- So we derive an offset value, 'off', for block_sz alignment.
- That 'off' makes sure the first va/pa address of a buffer is blk_sz aligned.
- Calculating 'off' may end up sacrificing the first buffer of the pool, so the
total number of buffers in the pool would be n-1, which is incorrect behavior. That's why
we add one additional buffer: we request the memzone to allocate an ((n+1) * total_elt_sz) pool
area when the F_BLK_SZ_ALIGNED flag is set.

>> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
>> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
>> ---
>>  lib/librte_mempool/rte_mempool.c | 19 ++++++++++++++++---
>>  lib/librte_mempool/rte_mempool.h |  1 +
>>  2 files changed, 17 insertions(+), 3 deletions(-)
>>
>> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
>> index 7dec2f51d..2010857f0 100644
>> --- a/lib/librte_mempool/rte_mempool.c
>> +++ b/lib/librte_mempool/rte_mempool.c
>> @@ -350,7 +350,7 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
>>  {
>>  	unsigned total_elt_sz;
>>  	unsigned i = 0;
>> -	size_t off;
>> +	size_t off, delta;
>>  	struct rte_mempool_memhdr *memhdr;
>>  	int ret;
>>  
>> @@ -387,7 +387,15 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
>>  	memhdr->free_cb = free_cb;
>>  	memhdr->opaque = opaque;
>>  
>> -	if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
>> +	if (mp->flags & MEMPOOL_F_POOL_BLK_SZ_ALIGNED) {
>> +		delta = (uintptr_t)vaddr % total_elt_sz;
>> +		off = total_elt_sz - delta;
>> +		/* Validate alignment */
>> +		if (((uintptr_t)vaddr + off) % total_elt_sz) {
>> +			RTE_LOG(ERR, MEMPOOL, "vaddr(%p) not aligned to total_elt_sz(%u)\n", (vaddr + off), total_elt_sz);
>> +			return -EINVAL;
>> +		}
>> +	} else if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
>>  		off = RTE_PTR_ALIGN_CEIL(vaddr, 8) - vaddr;
>>  	else
>>  		off = RTE_PTR_ALIGN_CEIL(vaddr, RTE_CACHE_LINE_SIZE) - vaddr;
> What is the purpose of this test? Can it fail?

Purpose is to sanity-check blk_sz alignment. No, it won't fail.
I thought it better to keep the sanity check, but if you see no value
in it, I will remove it in v2?

> Not sure having the delta variable is helpful. However, adding a
> small comment like this could help:
>
> 	/* align object start address to a multiple of total_elt_sz */
> 	off = total_elt_sz - ((uintptr_t)vaddr % total_elt_sz);
> 	
> About style, please don't mix brackets and no-bracket blocks in the
> same if/elseif/else.

ok, in v2. 

>> @@ -555,8 +563,13 @@ rte_mempool_populate_default(struct rte_mempool *mp)
>>  	}
>>  
>>  	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
>> +
>>  	for (mz_id = 0, n = mp->size; n > 0; mz_id++, n -= ret) {
>> -		size = rte_mempool_xmem_size(n, total_elt_sz, pg_shift);
>> +		if (mp->flags & MEMPOOL_F_POOL_BLK_SZ_ALIGNED)
>> +			size = rte_mempool_xmem_size(n + 1, total_elt_sz,
>> +							pg_shift);
>> +		else
>> +			size = rte_mempool_xmem_size(n, total_elt_sz, pg_shift);
>>  
>>  		ret = snprintf(mz_name, sizeof(mz_name),
>>  			RTE_MEMPOOL_MZ_FORMAT "_%d", mp->name, mz_id);
>
> One issue I see here is that this new flag breaks the function
> rte_mempool_xmem_size(), which calculates the maximum amount of memory
> required to store a given number of objects.
>
> It also probably breaks rte_mempool_xmem_usage().
>
> I don't have any good solution for now. A possibility is to change
> the behavior of these functions for everyone, meaning that we will
> always reserve more memory that really required. If this is done on
> every memory chunk (struct rte_mempool_memhdr), it can eat a lot
> of memory.
>
> Another approach would be to change the API of this function to
> pass the capability flags, or the mempool pointer... but there is
> a problem because these functions are usually called before the
> mempool is instanciated.
>
Per my description on [1/4]: if we agree to call
_ops_get_capability() at the very beginning, i.e. at _populate_default(),
then 'mp->flags' holds the capability flags, and we could add one more argument
to _xmem_size(, flags) / _xmem_usage(, flags):
- xmem_size()/xmem_usage() check for the capability bit in 'flags';
- if set, increase 'elt_num' accordingly.

That way your approach 2) makes sense to me and fits the design
very well, without wasting memory as you mentioned for approach 1).

Does that make sense?

>> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
>> index fd8722e69..99a20263d 100644
>> --- a/lib/librte_mempool/rte_mempool.h
>> +++ b/lib/librte_mempool/rte_mempool.h
>> @@ -267,6 +267,7 @@ struct rte_mempool {
>>  #define MEMPOOL_F_POOL_CREATED   0x0010 /**< Internal: pool is created. */
>>  #define MEMPOOL_F_NO_PHYS_CONTIG 0x0020 /**< Don't need physically contiguous objs. */
>>  #define MEMPOOL_F_POOL_CONTIG    0x0040 /**< Detect physcially contiguous objs */
>> +#define MEMPOOL_F_POOL_BLK_SZ_ALIGNED 0x0080 /**< Align buffer address to block size*/
>>  
>>  /**
>>   * @internal When debug is enabled, store some statistics.
> Same comment than for patch 3: the explanation should really be clarified.
> It's a hw specific limitation, which won't be obvious for the people that
> will read that code, so we must document it as clear as possible.
>
I don't see this as a HW limitation. As mentioned in [1/4], even an application
can request block alignment, right?

But I agree that I will reword the comment.

Thanks.


* Re: [PATCH 3/4] mempool: introduce block size align flag
  2017-07-05  7:35     ` santosh
@ 2017-07-10 13:15       ` Olivier Matz
  2017-07-10 16:22         ` santosh
  0 siblings, 1 reply; 116+ messages in thread
From: Olivier Matz @ 2017-07-10 13:15 UTC (permalink / raw)
  To: santosh; +Cc: dev, thomas, hemant.agrawal, jerin.jacob, bruce.richardson

On Wed, 5 Jul 2017 13:05:57 +0530, santosh <santosh.shukla@caviumnetworks.com> wrote:
> Hi Olivier,
> 
> On Monday 03 July 2017 10:07 PM, Olivier Matz wrote:
> 
> > On Wed, 21 Jun 2017 17:32:47 +0000, Santosh Shukla <santosh.shukla@caviumnetworks.com> wrote:  
> >> Some mempool hw like octeontx/fpa block, demands block size aligned
> >> buffer address.
> >>  
> > What is the meaning of block size aligned?  
> 
> block size is total_elem_sz.
> 
> > Does it mean that the address has to be a multiple of total_elt_size?  
> 
> yes.
> 
> > Is this constraint on the virtual address only?
> >  
> both.

You mean the virtual address and the physical address must both be multiples of
total_elt_size? How is that possible?

For instance, I have this on my dpdk instance:
Segment 0: phys:0x52c00000, len:4194304, virt:0x7fed26000000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 1: phys:0x53400000, len:163577856, virt:0x7fed1c200000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 2: phys:0x5d400000, len:20971520, virt:0x7fed1ac00000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
...

I assume total_elt_size is 2176.

In segment 0, I need to use (virt = 0x7fed26000000 + 1536) to be
multiple of 2176. But the corresponding physical address (0x52c00000 + 1536)
is not multiple of 2176. 


Please clarify if only the virtual address has to be aligned.


> 
> >> Introducing an MEMPOOL_F_POOL_BLK_SZ_ALIGNED flag.
> >> If this flag is set:
> >> 1) adjust 'off' value to block size aligned value.
> >> 2) Allocate one additional buffer. This buffer is used to make sure that
> >> requested 'n' buffers get correctly populated to mempool.
> >> Example:
> >> 	elem_sz = 2432 // total element size.
> >> 	n = 2111 // requested number of buffer.
> >> 	off = 2304 // new buf_offset value after step 1)
> >> 	vaddr = 0x0 // actual start address of pool
> >> 	pool_len = 5133952 // total pool length i.e.. (elem_sz * n)
> >>
> >> Since 'off' is a non-zero value so below condition would fail for the
> >> block size align case.
> >>
> >> (((vaddr + off) + (elem_sz * n)) <= (vaddr + pool_len))
> >>
> >> Which is incorrect behavior. Additional buffer will solve this
> >> problem and correctly populate 'n' buffer to mempool for the aligned
> >> mode.  
> > Sorry, but the example is not very clear.
> >  
> which part?
> 
> I'll try to reword.
> 
> The problem statement is:
> - We want start of buffer address aligned to block_sz aka total_elt_sz.
> 
> Proposed solution in this patch:
> - Let's say that we get 'x' size of memory chunk from memzone.
> - Ideally we start using buffer at address 0 to...(x-block_sz).
> - Not necessarily first buffer address i.e. 0 is aligned to block_sz.
> - So we derive offset value for block_sz alignment purpose i.e..'off' . 
> - That 'off' makes sure that first va/pa address of buffer is blk_sz aligned.
> - Calculating 'off' may end up sacrificing first buffer of pool. So total
> number of buffer in pool is n-1, Which is incorrect behavior, Thats why
> we add 1 addition buffer. We request memzone to allocate (n+1 * total_elt_sz) pool
> area when F_BLK_SZ_ALIGNED flag is set.
> 
> >> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
> >> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> >> ---
> >>  lib/librte_mempool/rte_mempool.c | 19 ++++++++++++++++---
> >>  lib/librte_mempool/rte_mempool.h |  1 +
> >>  2 files changed, 17 insertions(+), 3 deletions(-)
> >>
> >> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> >> index 7dec2f51d..2010857f0 100644
> >> --- a/lib/librte_mempool/rte_mempool.c
> >> +++ b/lib/librte_mempool/rte_mempool.c
> >> @@ -350,7 +350,7 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
> >>  {
> >>  	unsigned total_elt_sz;
> >>  	unsigned i = 0;
> >> -	size_t off;
> >> +	size_t off, delta;
> >>  	struct rte_mempool_memhdr *memhdr;
> >>  	int ret;
> >>  
> >> @@ -387,7 +387,15 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
> >>  	memhdr->free_cb = free_cb;
> >>  	memhdr->opaque = opaque;
> >>  
> >> -	if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
> >> +	if (mp->flags & MEMPOOL_F_POOL_BLK_SZ_ALIGNED) {
> >> +		delta = (uintptr_t)vaddr % total_elt_sz;
> >> +		off = total_elt_sz - delta;
> >> +		/* Validate alignment */
> >> +		if (((uintptr_t)vaddr + off) % total_elt_sz) {
> >> +			RTE_LOG(ERR, MEMPOOL, "vaddr(%p) not aligned to total_elt_sz(%u)\n", (vaddr + off), total_elt_sz);
> >> +			return -EINVAL;
> >> +		}
> >> +	} else if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
> >>  		off = RTE_PTR_ALIGN_CEIL(vaddr, 8) - vaddr;
> >>  	else
> >>  		off = RTE_PTR_ALIGN_CEIL(vaddr, RTE_CACHE_LINE_SIZE) - vaddr;  
> > What is the purpose of this test? Can it fail?  
> 
> Purpose is to sanity check blk_sz alignment. No it won;t fail.
> I thought better to keep sanity check but if you see no value
> then will remove in v2?

yes please


> > Not sure having the delta variable is helpful. However, adding a
> > small comment like this could help:
> >
> > 	/* align object start address to a multiple of total_elt_sz */
> > 	off = total_elt_sz - ((uintptr_t)vaddr % total_elt_sz);
> > 	
> > About style, please don't mix brackets and no-bracket blocks in the
> > same if/elseif/else.  
> 
> ok, in v2. 
> 
> >> @@ -555,8 +563,13 @@ rte_mempool_populate_default(struct rte_mempool *mp)
> >>  	}
> >>  
> >>  	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
> >> +
> >>  	for (mz_id = 0, n = mp->size; n > 0; mz_id++, n -= ret) {
> >> -		size = rte_mempool_xmem_size(n, total_elt_sz, pg_shift);
> >> +		if (mp->flags & MEMPOOL_F_POOL_BLK_SZ_ALIGNED)
> >> +			size = rte_mempool_xmem_size(n + 1, total_elt_sz,
> >> +							pg_shift);
> >> +		else
> >> +			size = rte_mempool_xmem_size(n, total_elt_sz, pg_shift);
> >>  
> >>  		ret = snprintf(mz_name, sizeof(mz_name),
> >>  			RTE_MEMPOOL_MZ_FORMAT "_%d", mp->name, mz_id);  
> >
> > One issue I see here is that this new flag breaks the function
> > rte_mempool_xmem_size(), which calculates the maximum amount of memory
> > required to store a given number of objects.
> >
> > It also probably breaks rte_mempool_xmem_usage().
> >
> > I don't have any good solution for now. A possibility is to change
> > the behavior of these functions for everyone, meaning that we will
> > always reserve more memory that really required. If this is done on
> > every memory chunk (struct rte_mempool_memhdr), it can eat a lot
> > of memory.
> >
> > Another approach would be to change the API of this function to
> > pass the capability flags, or the mempool pointer... but there is
> > a problem because these functions are usually called before the
> > mempool is instanciated.
> >  
> Per my description on [1/4]. If we agree to call
> _ops_get_capability() at very beginning i.e.. at _populate_default()
> then 'mp->flag' has capability flag. and We could add one more argument
> in _xmem_size( , flag)/_xmem_usage(, flag). 
> - xmem_size / xmem_usage() to check for that capability bit in 'flag'. 
> - if set then increase 'elt_num' by num.
> 
> That way your approach 2) make sense to me and it will very well fit
> in design. Won't waste memory like you mentioned in approach 1).
> 
> Does that make sense?

The use case of rte_mempool_xmem_size()/rte_mempool_xmem_usage()
is to determine how much memory is needed to instantiate a mempool:

  sz = rte_mempool_xmem_size(...);
  ptr = allocate(sz);
  paddr_table = get_phys_map(ptr);
  mp = rte_mempool_xmem_create(..., ptr, ..., paddr_table, ...);

If we want to transform the code to use another mempool_ops, it is
not possible:

  mp = rte_mempool_create_empty();
  rte_mempool_set_ops_byname(mp, "my-handler");
  sz = rte_mempool_xmem_size(...);   /* <<< the mp pointer is not passed */
  ptr = allocate(sz);
  paddr_table = get_phys_map(ptr);
  mp = rte_mempool_xmem_create(..., ptr, ..., paddr_table, ...);

So, yes, this approach would work but it needs to change the API.
I think it is possible to keep compatibility with previous versions.

 
> >> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> >> index fd8722e69..99a20263d 100644
> >> --- a/lib/librte_mempool/rte_mempool.h
> >> +++ b/lib/librte_mempool/rte_mempool.h
> >> @@ -267,6 +267,7 @@ struct rte_mempool {
> >>  #define MEMPOOL_F_POOL_CREATED   0x0010 /**< Internal: pool is created. */
> >>  #define MEMPOOL_F_NO_PHYS_CONTIG 0x0020 /**< Don't need physically contiguous objs. */
> >>  #define MEMPOOL_F_POOL_CONTIG    0x0040 /**< Detect physcially contiguous objs */
> >> +#define MEMPOOL_F_POOL_BLK_SZ_ALIGNED 0x0080 /**< Align buffer address to block size*/
> >>  
> >>  /**
> >>   * @internal When debug is enabled, store some statistics.  
> > Same comment than for patch 3: the explanation should really be clarified.
> > It's a hw specific limitation, which won't be obvious for the people that
> > will read that code, so we must document it as clear as possible.
> >  
> I won't see this as HW limitation. As mentioned in [1/4], even application
> can request for block alignment, right?

What would be the reason for an application to request this block alignment?
The total size of the element is usually not known by the application,
because the mempool adds its header and footer.


* Re: [PATCH 1/4] mempool: get the external mempool capability
  2017-07-05  6:41     ` santosh
@ 2017-07-10 13:55       ` Olivier Matz
  2017-07-10 16:09         ` santosh
  0 siblings, 1 reply; 116+ messages in thread
From: Olivier Matz @ 2017-07-10 13:55 UTC (permalink / raw)
  To: santosh; +Cc: dev, thomas, hemant.agrawal, jerin.jacob, bruce.richardson

On Wed, 5 Jul 2017 12:11:52 +0530, santosh <santosh.shukla@caviumnetworks.com> wrote:
> Hi Olivier,
> 
> On Monday 03 July 2017 10:07 PM, Olivier Matz wrote:
> 
> > Hi Santosh,
> >
> > On Wed, 21 Jun 2017 17:32:45 +0000, Santosh Shukla <santosh.shukla@caviumnetworks.com> wrote:  
> >> Allow external mempool to advertise its capability.
> >> A handler been introduced called rte_mempool_ops_get_hw_cap.
> >> - Upon ->get_hw_cap call, mempool driver will advertise
> >> capability by returning flag.
> >> - Common layer updates flag value in 'mp->flags'.
> >>
> >> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
> >> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>  
> > I guess you've already seen the compilation issue when shared libs
> > are enabled:
> > http://dpdk.org/dev/patchwork/patch/25603
> >  
> Yes, Will fix in v2.
> 
> >  
> >> ---
> >>  lib/librte_mempool/rte_mempool.c           |  5 +++++
> >>  lib/librte_mempool/rte_mempool.h           | 20 ++++++++++++++++++++
> >>  lib/librte_mempool/rte_mempool_ops.c       | 14 ++++++++++++++
> >>  lib/librte_mempool/rte_mempool_version.map |  7 +++++++
> >>  4 files changed, 46 insertions(+)
> >>
> >> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> >> index f65310f60..045baef45 100644
> >> --- a/lib/librte_mempool/rte_mempool.c
> >> +++ b/lib/librte_mempool/rte_mempool.c
> >> @@ -527,6 +527,11 @@ rte_mempool_populate_default(struct rte_mempool *mp)
> >>  	if (mp->nb_mem_chunks != 0)
> >>  		return -EEXIST;
> >>  
> >> +	/* Get external mempool capability */
> >> +	ret = rte_mempool_ops_get_hw_cap(mp);  
> > "hw" can be removed since some handlers are software (the other occurences
> > of hw should be removed too)
> >
> > "capabilities" is clearer than "cap"
> >
> > So I suggest rte_mempool_ops_get_capabilities() instead
> > With this name, the comment above becomes overkill...  
> 
> ok. Will take care in v2.
> 
> >> +	if (ret != -ENOENT)  
> > -ENOTSUP looks more appropriate (like in ethdev)
> >  
> imo: -ENOENT tells that the driver has no new entry for the capability flag (mp->flags).
> But no strong opinion against -ENOTSUP.
> 
> >> +		mp->flags |= ret;  
> > I'm wondering if these capability flags should be mixed with
> > other mempool flags.
> >
> > We can maybe remove this code above and directly call
> > rte_mempool_ops_get_capabilities() when we need to get them.  
> 
> 0) Treating this capability flag differently vs. the existing RTE_MEMPOOL_F would
> result in adding a new flag entry in struct rte_mempool { .drv_flag }, for example.
> 1) That new flag entry will break ABI.
> 2) In fact, an application can benefit from this capability flag by explicitly
> setting it in the pool create API (e.g. rte_mempool_create_empty(, , , , ,
> _F_POOL_CONTIG | _F_BLK_SZ_ALIGNED)).
> 
> These flags' use cases are not limited to driver scope; applications can benefit too.
> 
> 3) Also, given that we have space in the RTE_MEMPOOL_F_XX area, adding a couple
> more bits won't impact the design or affect the pool creation sequence.
> 
> 4) Calling _ops_get_capability() in the _populate_default() area would address the
> issues you pointed out at patch [3/4]. I will explain the details of 'how' in the
> respective patch [3/4].
> 
> 5) Above all, the intent is to make sure that the common layer manages the
> capability flags on behalf of the driver or application.


I don't see any use case where an application could request
a block size alignment.

The problem with adding flags that are accessible to the user
is the complexity it adds to the API. If every driver comes
with its own flags, I'm afraid the generic code will soon become
unmaintainable. Especially, the dependencies between the flags
will have to be handled somewhere.

But, ok, let's do it.



> >> +
> >>  	if (rte_xen_dom0_supported()) {
> >>  		pg_sz = RTE_PGSIZE_2M;
> >>  		pg_shift = rte_bsf32(pg_sz);
> >> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> >> index a65f1a79d..c3cdc77e4 100644
> >> --- a/lib/librte_mempool/rte_mempool.h
> >> +++ b/lib/librte_mempool/rte_mempool.h
> >> @@ -390,6 +390,12 @@ typedef int (*rte_mempool_dequeue_t)(struct rte_mempool *mp,
> >>   */
> >>  typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
> >>  
> >> +/**
> >> + * Get the mempool hw capability.
> >> + */
> >> +typedef int (*rte_mempool_get_hw_cap_t)(struct rte_mempool *mp);
> >> +
> >> +  
> > If possible, use "const struct rte_mempool *mp"
> >
> > Since flags are unsigned, I would also prefer a function returning an
> > int (0 on success, negative on error) and writing to an unsigned pointer
> > provided by the user.
> >  
> Confused? mp->flags is int, not unsigned, and we're returning
> -ENOENT/-ENOTSUP on error and a positive value in case the driver supports the capability.

Returning an int that is either an error or a flag mask prevents
using the last flag, 0x80000000, because it is also the sign bit.


* Re: [PATCH 1/4] mempool: get the external mempool capability
  2017-07-10 13:55       ` Olivier Matz
@ 2017-07-10 16:09         ` santosh
  0 siblings, 0 replies; 116+ messages in thread
From: santosh @ 2017-07-10 16:09 UTC (permalink / raw)
  To: Olivier Matz; +Cc: dev, thomas, hemant.agrawal, jerin.jacob, bruce.richardson

On Monday 10 July 2017 07:25 PM, Olivier Matz wrote:

> On Wed, 5 Jul 2017 12:11:52 +0530, santosh <santosh.shukla@caviumnetworks.com> wrote:
>> Hi Olivier,
>>
>> On Monday 03 July 2017 10:07 PM, Olivier Matz wrote:
>>
>>> Hi Santosh,
>>>
>>> On Wed, 21 Jun 2017 17:32:45 +0000, Santosh Shukla <santosh.shukla@caviumnetworks.com> wrote:  
>>>> Allow external mempool to advertise its capability.
>>>> A handler been introduced called rte_mempool_ops_get_hw_cap.
>>>> - Upon ->get_hw_cap call, mempool driver will advertise
>>>> capability by returning flag.
>>>> - Common layer updates flag value in 'mp->flags'.
>>>>
>>>> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
>>>> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>  
>>> I guess you've already seen the compilation issue when shared libs
>>> are enabled:
>>> http://dpdk.org/dev/patchwork/patch/25603
>>>  
>> Yes, Will fix in v2.
>>
>>>  
>>>> ---
>>>>  lib/librte_mempool/rte_mempool.c           |  5 +++++
>>>>  lib/librte_mempool/rte_mempool.h           | 20 ++++++++++++++++++++
>>>>  lib/librte_mempool/rte_mempool_ops.c       | 14 ++++++++++++++
>>>>  lib/librte_mempool/rte_mempool_version.map |  7 +++++++
>>>>  4 files changed, 46 insertions(+)
>>>>
>>>> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
>>>> index f65310f60..045baef45 100644
>>>> --- a/lib/librte_mempool/rte_mempool.c
>>>> +++ b/lib/librte_mempool/rte_mempool.c
>>>> @@ -527,6 +527,11 @@ rte_mempool_populate_default(struct rte_mempool *mp)
>>>>  	if (mp->nb_mem_chunks != 0)
>>>>  		return -EEXIST;
>>>>  
>>>> +	/* Get external mempool capability */
>>>> +	ret = rte_mempool_ops_get_hw_cap(mp);  
>>> "hw" can be removed since some handlers are software (the other occurences
>>> of hw should be removed too)
>>>
>>> "capabilities" is clearer than "cap"
>>>
>>> So I suggest rte_mempool_ops_get_capabilities() instead
>>> With this name, the comment above becomes overkill...  
>> ok. Will take care in v2.
>>
>>>> +	if (ret != -ENOENT)  
>>> -ENOTSUP looks more appropriate (like in ethdev)
>>>  
>> imo: -ENOENT tells that the driver has no new entry for the capability flag (mp->flags).
>> But no strong opinion against -ENOTSUP.
>>
>>>> +		mp->flags |= ret;  
>>> I'm wondering if these capability flags should be mixed with
>>> other mempool flags.
>>>
>>> We can maybe remove this code above and directly call
>>> rte_mempool_ops_get_capabilities() when we need to get them.  
>> 0) Treating this capability flag differently vs. the existing RTE_MEMPOOL_F would
>> result in adding a new flag entry in struct rte_mempool { .drv_flag }, for example.
>> 1) That new flag entry will break ABI.
>> 2) In fact, an application can benefit from this capability flag by explicitly
>> setting it in the pool create API (e.g. rte_mempool_create_empty(, , , , ,
>> _F_POOL_CONTIG | _F_BLK_SZ_ALIGNED)).
>>
>> These flags' use cases are not limited to driver scope; applications can benefit too.
>>
>> 3) Also, given that we have space in the RTE_MEMPOOL_F_XX area, adding a couple
>> more bits won't impact the design or affect the pool creation sequence.
>>
>> 4) Calling _ops_get_capability() in the _populate_default() area would address the
>> issues you pointed out at patch [3/4]. I will explain the details of 'how' in the
>> respective patch [3/4].
>>
>> 5) Above all, the intent is to make sure that the common layer manages the
>> capability flags on behalf of the driver or application.
>
> I don't see any use case where an application could request
> a block size alignment.
>
> The problem with adding flags that are accessible to the user
> is the complexity it adds to the API. If every driver comes
> with its own flags, I'm afraid the generic code will soon become
> unmaintainable. Especially, the dependencies between the flags
> will have to be handled somewhere.
>
> But, ok, let's do it.
>
>
>
>>>> +
>>>>  	if (rte_xen_dom0_supported()) {
>>>>  		pg_sz = RTE_PGSIZE_2M;
>>>>  		pg_shift = rte_bsf32(pg_sz);
>>>> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
>>>> index a65f1a79d..c3cdc77e4 100644
>>>> --- a/lib/librte_mempool/rte_mempool.h
>>>> +++ b/lib/librte_mempool/rte_mempool.h
>>>> @@ -390,6 +390,12 @@ typedef int (*rte_mempool_dequeue_t)(struct rte_mempool *mp,
>>>>   */
>>>>  typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
>>>>  
>>>> +/**
>>>> + * Get the mempool hw capability.
>>>> + */
>>>> +typedef int (*rte_mempool_get_hw_cap_t)(struct rte_mempool *mp);
>>>> +
>>>> +  
>>> If possible, use "const struct rte_mempool *mp"
>>>
>>> Since flags are unsigned, I would also prefer a function returning an
>>> int (0 on success, negative on error) and writing to an unsigned pointer
>>> provided by the user.
>>>  
>> Confused? mp->flags is int, not unsigned, and we're returning
>> -ENOENT/-ENOTSUP on error and a positive value in case the driver supports the capability.
> Returning an int that is either an error or a flag mask prevents
> using the last flag, 0x80000000, because it is also the sign bit.
>
Ok. Will address in v2.

BTW: mp->flags is int, and updating it with a flag value like
0x80000000 will be a problem. So do you want me to change the
mp->flags data type from int to unsigned int and send out a
deprecation notice for the same?


* Re: [PATCH 3/4] mempool: introduce block size align flag
  2017-07-10 13:15       ` Olivier Matz
@ 2017-07-10 16:22         ` santosh
  0 siblings, 0 replies; 116+ messages in thread
From: santosh @ 2017-07-10 16:22 UTC (permalink / raw)
  To: Olivier Matz; +Cc: dev, thomas, hemant.agrawal, jerin.jacob, bruce.richardson

On Monday 10 July 2017 06:45 PM, Olivier Matz wrote:

> On Wed, 5 Jul 2017 13:05:57 +0530, santosh <santosh.shukla@caviumnetworks.com> wrote:
>> Hi Olivier,
>>
>> On Monday 03 July 2017 10:07 PM, Olivier Matz wrote:
>>
>>> On Wed, 21 Jun 2017 17:32:47 +0000, Santosh Shukla <santosh.shukla@caviumnetworks.com> wrote:  
>>>> Some mempool hw like octeontx/fpa block, demands block size aligned
>>>> buffer address.
>>>>  
>>> What is the meaning of block size aligned?  
>> block size is total_elem_sz.
>>
>>> Does it mean that the address has to be a multiple of total_elt_size?  
>> yes.
>>
>>> Is this constraint on the virtual address only?
>>>  
>> both.
> You mean virtual address and physical address must be a multiple of
> total_elt_size? How is it possible?
>
> For instance, I have this on my dpdk instance:
> Segment 0: phys:0x52c00000, len:4194304, virt:0x7fed26000000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
> Segment 1: phys:0x53400000, len:163577856, virt:0x7fed1c200000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
> Segment 2: phys:0x5d400000, len:20971520, virt:0x7fed1ac00000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
> ...
>
> I assume total_elt_size is 2176.
>
> In segment 0, I need to use (virt = 0x7fed26000000 + 1536) to be
> multiple of 2176. But the corresponding physical address (0x52c00000 + 1536)
> is not multiple of 2176. 
>
>
> Please clarify if only the virtual address has to be aligned.

Yes, only the virtual address.

>
>>>> Introducing an MEMPOOL_F_POOL_BLK_SZ_ALIGNED flag.
>>>> If this flag is set:
>>>> 1) adjust 'off' value to block size aligned value.
>>>> 2) Allocate one additional buffer. This buffer is used to make sure that
>>>> requested 'n' buffers get correctly populated to mempool.
>>>> Example:
>>>> 	elem_sz = 2432 // total element size.
>>>> 	n = 2111 // requested number of buffer.
>>>> 	off = 2304 // new buf_offset value after step 1)
>>>> 	vaddr = 0x0 // actual start address of pool
>>>> 	pool_len = 5133952 // total pool length i.e.. (elem_sz * n)
>>>>
>>>> Since 'off' is a non-zero value so below condition would fail for the
>>>> block size align case.
>>>>
>>>> (((vaddr + off) + (elem_sz * n)) <= (vaddr + pool_len))
>>>>
>>>> Which is incorrect behavior. Additional buffer will solve this
>>>> problem and correctly populate 'n' buffer to mempool for the aligned
>>>> mode.  
>>> Sorry, but the example is not very clear.
>>>  
>> which part?
>>
>> I'll try to reword.
>>
>> The problem statement is:
>> - We want the buffer start address aligned to block_sz, aka total_elt_sz.
>>
>> Proposed solution in this patch:
>> - Let's say we get a memory chunk of size 'x' from the memzone.
>> - Ideally we would use buffers from address 0 up to (x - block_sz).
>> - But the first buffer address, i.e. 0, is not necessarily aligned to block_sz.
>> - So we derive an offset value, 'off', for block_sz alignment purposes.
>> - That 'off' makes sure the first va/pa address of the buffer is blk_sz aligned.
>> - Calculating 'off' may end up sacrificing the first buffer of the pool, so the
>> total number of buffers in the pool becomes n-1, which is incorrect behavior.
>> That's why we add 1 additional buffer: we request the memzone to allocate an
>> ((n+1) * total_elt_sz) pool area when the F_BLK_SZ_ALIGNED flag is set.
>>
>>>> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
>>>> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
>>>> ---
>>>>  lib/librte_mempool/rte_mempool.c | 19 ++++++++++++++++---
>>>>  lib/librte_mempool/rte_mempool.h |  1 +
>>>>  2 files changed, 17 insertions(+), 3 deletions(-)
>>>>
>>>> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
>>>> index 7dec2f51d..2010857f0 100644
>>>> --- a/lib/librte_mempool/rte_mempool.c
>>>> +++ b/lib/librte_mempool/rte_mempool.c
>>>> @@ -350,7 +350,7 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
>>>>  {
>>>>  	unsigned total_elt_sz;
>>>>  	unsigned i = 0;
>>>> -	size_t off;
>>>> +	size_t off, delta;
>>>>  	struct rte_mempool_memhdr *memhdr;
>>>>  	int ret;
>>>>  
>>>> @@ -387,7 +387,15 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
>>>>  	memhdr->free_cb = free_cb;
>>>>  	memhdr->opaque = opaque;
>>>>  
>>>> -	if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
>>>> +	if (mp->flags & MEMPOOL_F_POOL_BLK_SZ_ALIGNED) {
>>>> +		delta = (uintptr_t)vaddr % total_elt_sz;
>>>> +		off = total_elt_sz - delta;
>>>> +		/* Validate alignment */
>>>> +		if (((uintptr_t)vaddr + off) % total_elt_sz) {
>>>> +			RTE_LOG(ERR, MEMPOOL, "vaddr(%p) not aligned to total_elt_sz(%u)\n", (vaddr + off), total_elt_sz);
>>>> +			return -EINVAL;
>>>> +		}
>>>> +	} else if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
>>>>  		off = RTE_PTR_ALIGN_CEIL(vaddr, 8) - vaddr;
>>>>  	else
>>>>  		off = RTE_PTR_ALIGN_CEIL(vaddr, RTE_CACHE_LINE_SIZE) - vaddr;  
>>> What is the purpose of this test? Can it fail?  
>> The purpose is to sanity-check the blk_sz alignment. No, it won't fail.
>> I thought it better to keep the sanity check, but if you see no value
>> in it, should I remove it in v2?
> yes please
>
>
>>> Not sure having the delta variable is helpful. However, adding a
>>> small comment like this could help:
>>>
>>> 	/* align object start address to a multiple of total_elt_sz */
>>> 	off = total_elt_sz - ((uintptr_t)vaddr % total_elt_sz);
>>> 	
>>> About style, please don't mix brackets and no-bracket blocks in the
>>> same if/elseif/else.  
>> ok, in v2. 
>>
>>>> @@ -555,8 +563,13 @@ rte_mempool_populate_default(struct rte_mempool *mp)
>>>>  	}
>>>>  
>>>>  	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
>>>> +
>>>>  	for (mz_id = 0, n = mp->size; n > 0; mz_id++, n -= ret) {
>>>> -		size = rte_mempool_xmem_size(n, total_elt_sz, pg_shift);
>>>> +		if (mp->flags & MEMPOOL_F_POOL_BLK_SZ_ALIGNED)
>>>> +			size = rte_mempool_xmem_size(n + 1, total_elt_sz,
>>>> +							pg_shift);
>>>> +		else
>>>> +			size = rte_mempool_xmem_size(n, total_elt_sz, pg_shift);
>>>>  
>>>>  		ret = snprintf(mz_name, sizeof(mz_name),
>>>>  			RTE_MEMPOOL_MZ_FORMAT "_%d", mp->name, mz_id);  
>>> One issue I see here is that this new flag breaks the function
>>> rte_mempool_xmem_size(), which calculates the maximum amount of memory
>>> required to store a given number of objects.
>>>
>>> It also probably breaks rte_mempool_xmem_usage().
>>>
>>> I don't have any good solution for now. A possibility is to change
>>> the behavior of these functions for everyone, meaning that we will
>>> always reserve more memory that really required. If this is done on
>>> every memory chunk (struct rte_mempool_memhdr), it can eat a lot
>>> of memory.
>>>
>>> Another approach would be to change the API of this function to
>>> pass the capability flags, or the mempool pointer... but there is
>>> a problem because these functions are usually called before the
>>> mempool is instantiated.
>>>  
>> Per my description in [1/4]: if we agree to call
>> _ops_get_capability() at the very beginning, i.e. in _populate_default(),
>> then 'mp->flags' has the capability flag, and we could add one more argument
>> in _xmem_size(, flag)/_xmem_usage(, flag):
>> - xmem_size()/xmem_usage() check for that capability bit in 'flag';
>> - if it is set, then increase 'elt_num' accordingly.
>>
>> That way your approach 2) makes sense to me and will fit very well
>> into the design. It won't waste memory as you mentioned for approach 1).
>>
>> Does that make sense?
> The use case of rte_mempool_xmem_size()/rte_mempool_xmem_usage()
> is to determine how much memory is needed to instantiate a mempool:
>
>   sz = rte_mempool_xmem_size(...);
>   ptr = allocate(sz);
>   paddr_table = get_phys_map(ptr);
>   mp = rte_mempool_xmem_create(..., ptr, ..., paddr_table, ...);
>
> If we want to transform the code to use another mempool_ops, it is
> not possible:
>
>   mp = rte_mempool_create_empty();
>   rte_mempool_set_ops_byname(mp, "my-handler");
>   sz = rte_mempool_xmem_size(...);   /* <<< the mp pointer is not passed */
>   ptr = allocate(sz);
>   paddr_table = get_phys_map(ptr);
>   mp = rte_mempool_xmem_create(..., ptr, ..., paddr_table, ...);
>
> So, yes, this approach would work but it needs to change the API.
> I think it is possible to keep a compat with previous versions.
>
>  

Ok. I'm planning to send out a deprecation notice for the xmem_size/usage APIs.
Does that make sense to you?

>>>> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
>>>> index fd8722e69..99a20263d 100644
>>>> --- a/lib/librte_mempool/rte_mempool.h
>>>> +++ b/lib/librte_mempool/rte_mempool.h
>>>> @@ -267,6 +267,7 @@ struct rte_mempool {
>>>>  #define MEMPOOL_F_POOL_CREATED   0x0010 /**< Internal: pool is created. */
>>>>  #define MEMPOOL_F_NO_PHYS_CONTIG 0x0020 /**< Don't need physically contiguous objs. */
>>>>  #define MEMPOOL_F_POOL_CONTIG    0x0040 /**< Detect physcially contiguous objs */
>>>> +#define MEMPOOL_F_POOL_BLK_SZ_ALIGNED 0x0080 /**< Align buffer address to block size*/
>>>>  
>>>>  /**
>>>>   * @internal When debug is enabled, store some statistics.  
>>> Same comment as for patch 3: the explanation should really be clarified.
>>> It's a hw-specific limitation, which won't be obvious to the people who
>>> will read that code, so we must document it as clearly as possible.
>>>  
>> I don't see this as a HW limitation. As mentioned in [1/4], even an application
>> can request block alignment, right?
> What would be the reason for an application to request this block alignment?
> The total size of the element is usually not known by the application,
> because the mempool adds its header and footer.
>
There were patches for custom alignment in the past [1], so it's not a new
initiative.
The application doesn't have to know the internal block_size (total_elem_sz), but
my point is that an application can very much request a start address aligned
to the block_size.
Besides that, we have enough space in the MEMPOOL_F_ area, so keeping an
alignment flag is not a problem.

Thanks.
[1] http://dpdk.org/dev/patchwork/patch/23885/


* [PATCH v2 0/6] Infrastructure to support octeontx HW mempool manager
  2017-06-21 17:32 [PATCH 0/4] Infrastructure to support octeontx HW mempool manager Santosh Shukla
                   ` (3 preceding siblings ...)
  2017-06-21 17:32 ` [PATCH 4/4] mempool: update range info to pool Santosh Shukla
@ 2017-07-13  9:32 ` Santosh Shukla
  2017-07-13  9:32   ` [PATCH v2 1/6] mempool: fix flags data type Santosh Shukla
                     ` (7 more replies)
  4 siblings, 8 replies; 116+ messages in thread
From: Santosh Shukla @ 2017-07-13  9:32 UTC (permalink / raw)
  To: thomas, dev, olivier.matz; +Cc: jerin.jacob, hemant.agrawal, Santosh Shukla

v2:
(Note: the v2 work is based on deprecation notice [1]; it's for 17.11.)

In order to support the octeontx HW mempool manager, the common mempool layer
must meet the conditions below:
- The object start address should be block size (total element size) aligned.
- Objects must have physically contiguous addresses within the pool.

Right now, mempool supports neither.

This patchset adds infrastructure to support both conditions in a _generic_ way.
The proposed solution won't affect existing mempool drivers or their functionality.

Summary:
Introducing capability flags. Mempool drivers can now advertise their
capabilities to the common mempool layer (at pool creation time).
Handlers are introduced in order to support the capability flags.

Flags:
* MEMPOOL_F_CAPA_PHYS_CONTIG - if this flag is set, detect whether the object
has a physically contiguous address within a hugepage.

* MEMPOOL_F_POOL_BLK_SZ_ALIGNED - if this flag is set, make sure that object
addresses are block size aligned.

API:
Two handlers are introduced:
* rte_mempool_ops_get_capability - advertise the mempool manager's capabilities.
* rte_mempool_ops_update_range - update the start and end address range for the
HW mempool manager.

v1 --> v2:
* [01/06] Per deprecation notice [1], changed the rte_mempool 'flags'
  data type from int to unsigned int and removed the flags param
  from the _xmem_size/usage APIs.
* [02/06] Incorporated review feedback from v1 [2] (suggested by Olivier).
* [03/06] Renamed the flag to MEMPOOL_F_CAPA_PHYS_CONTIG
  and reworded the comment (suggested by Olivier per v1 [3]).
* [04/06] Added a new mempool arg in xmem_size/usage (suggested by Olivier).
* [05/06] Patch description changed.
        - Removed the if/else-if bracket mix.
        - Removed the sanity check for alignment.
        - Removed the extra var 'delta'.
        - Removed __rte_unused from xmem_usage/size and added the _BLK_SZ_ALIGN check.
        (Suggested by Olivier per v1 [4].)
* [06/06] Added RTE_FUNC_PTR_OR_RET in rte_mempool_ops_update_ops.

Checkpatch status:
* WARNING: line over 80 characters
Noticed for debug messages.

Work history:
Refer [5].

Thanks.

[1] deprecation notice: http://dpdk.org/dev/patchwork/patch/26872/
[2] v1: http://dpdk.org/dev/patchwork/patch/25603/
[3] v1: http://dpdk.org/dev/patchwork/patch/25604/
[4] v1: http://dpdk.org/dev/patchwork/patch/25605/
[5] v1: http://dev.dpdk.narkive.com/Qcu55Lgz/dpdk-dev-patch-0-4-infrastructure-to-support-octeontx-hw-mempool-manager


Santosh Shukla (6):
  mempool: fix flags data type
  mempool: get the mempool capability
  mempool: detect physical contiguous object in pool
  mempool: add mempool arg in xmem size and usage
  mempool: introduce block size align flag
  mempool: update range info to pool

 drivers/net/xenvirt/rte_mempool_gntalloc.c |  5 +-
 lib/librte_mempool/rte_mempool.c           | 42 ++++++++++++++---
 lib/librte_mempool/rte_mempool.h           | 75 ++++++++++++++++++++++--------
 lib/librte_mempool/rte_mempool_ops.c       | 26 +++++++++++
 lib/librte_mempool/rte_mempool_version.map |  8 ++++
 test/test/test_mempool.c                   | 22 ++++-----
 test/test/test_mempool_perf.c              |  4 +-
 7 files changed, 140 insertions(+), 42 deletions(-)

-- 
2.13.0


* [PATCH v2 1/6] mempool: fix flags data type
  2017-07-13  9:32 ` [PATCH v2 0/6] Infrastructure to support octeontx HW mempool manager Santosh Shukla
@ 2017-07-13  9:32   ` Santosh Shukla
  2017-07-13  9:32   ` [PATCH v2 2/6] mempool: get the mempool capability Santosh Shukla
                     ` (6 subsequent siblings)
  7 siblings, 0 replies; 116+ messages in thread
From: Santosh Shukla @ 2017-07-13  9:32 UTC (permalink / raw)
  To: thomas, dev, olivier.matz; +Cc: jerin.jacob, hemant.agrawal, Santosh Shukla

mp->flags is int, but the mempool API updates an unsigned int
value in 'flags', so fix the 'flags' data type.

The patch also does some mp->flags cleanup:
* Remove the redundant 'flags' API description from
  - __rte_mempool_generic_put
  - __rte_mempool_generic_get

* Remove the unused 'flags' param from
  - rte_mempool_generic_put
  - rte_mempool_generic_get

* Fix mempool var data types in mempool.c
  - mz_flags is int; change it to unsigned int.

Fixes: af75078fec ("first public release")
Fixes: 454a0a7009 ("mempool: use cache in single producer or consumer mode")
Fixes: d6f78df6fe ("mempool: use bit flags for multi consumers and producers")
Fixes: d1d914ebbc ("mempool: allocate in several memory chunks by default")

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
---
- Changes are based on per deprecation notice [1]
[1] http://dpdk.org/dev/patchwork/patch/26872/

 lib/librte_mempool/rte_mempool.c |  4 ++--
 lib/librte_mempool/rte_mempool.h | 23 +++++------------------
 test/test/test_mempool.c         | 18 +++++++++---------
 test/test/test_mempool_perf.c    |  4 ++--
 4 files changed, 18 insertions(+), 31 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 6fc3c9c7c..237665c65 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -515,7 +515,7 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
 int
 rte_mempool_populate_default(struct rte_mempool *mp)
 {
-	int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
+	unsigned int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
 	char mz_name[RTE_MEMZONE_NAMESIZE];
 	const struct rte_memzone *mz;
 	size_t size, total_elt_sz, align, pg_sz, pg_shift;
@@ -742,7 +742,7 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
 	struct rte_tailq_entry *te = NULL;
 	const struct rte_memzone *mz = NULL;
 	size_t mempool_size;
-	int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
+	unsigned int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
 	struct rte_mempool_objsz objsz;
 	unsigned lcore_id;
 	int ret;
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 76b5b3b15..bd7be2319 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -226,7 +226,7 @@ struct rte_mempool {
 	};
 	void *pool_config;               /**< optional args for ops alloc. */
 	const struct rte_memzone *mz;    /**< Memzone where pool is alloc'd. */
-	int flags;                       /**< Flags of the mempool. */
+	unsigned int flags;              /**< Flags of the mempool. */
 	int socket_id;                   /**< Socket id passed at create. */
 	uint32_t size;                   /**< Max size of the mempool. */
 	uint32_t cache_size;
@@ -1034,9 +1034,6 @@ rte_mempool_default_cache(struct rte_mempool *mp, unsigned lcore_id)
  *   positive.
  * @param cache
  *   A pointer to a mempool cache structure. May be NULL if not needed.
- * @param flags
- *   The flags used for the mempool creation.
- *   Single-producer (MEMPOOL_F_SP_PUT flag) or multi-producers.
  */
 static __rte_always_inline void
 __mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
@@ -1096,14 +1093,10 @@ __mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
  *   The number of objects to add in the mempool from the obj_table.
  * @param cache
  *   A pointer to a mempool cache structure. May be NULL if not needed.
- * @param flags
- *   The flags used for the mempool creation.
- *   Single-producer (MEMPOOL_F_SP_PUT flag) or multi-producers.
  */
 static __rte_always_inline void
 rte_mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
-			unsigned n, struct rte_mempool_cache *cache,
-			__rte_unused int flags)
+			unsigned n, struct rte_mempool_cache *cache)
 {
 	__mempool_check_cookies(mp, obj_table, n, 0);
 	__mempool_generic_put(mp, obj_table, n, cache);
@@ -1129,7 +1122,7 @@ rte_mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
 {
 	struct rte_mempool_cache *cache;
 	cache = rte_mempool_default_cache(mp, rte_lcore_id());
-	rte_mempool_generic_put(mp, obj_table, n, cache, mp->flags);
+	rte_mempool_generic_put(mp, obj_table, n, cache);
 }
 
 /**
@@ -1160,9 +1153,6 @@ rte_mempool_put(struct rte_mempool *mp, void *obj)
  *   The number of objects to get, must be strictly positive.
  * @param cache
  *   A pointer to a mempool cache structure. May be NULL if not needed.
- * @param flags
- *   The flags used for the mempool creation.
- *   Single-consumer (MEMPOOL_F_SC_GET flag) or multi-consumers.
  * @return
  *   - >=0: Success; number of objects supplied.
  *   - <0: Error; code of ring dequeue function.
@@ -1241,16 +1231,13 @@ __mempool_generic_get(struct rte_mempool *mp, void **obj_table,
  *   The number of objects to get from mempool to obj_table.
  * @param cache
  *   A pointer to a mempool cache structure. May be NULL if not needed.
- * @param flags
- *   The flags used for the mempool creation.
- *   Single-consumer (MEMPOOL_F_SC_GET flag) or multi-consumers.
  * @return
  *   - 0: Success; objects taken.
  *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.
  */
 static __rte_always_inline int
 rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table, unsigned n,
-			struct rte_mempool_cache *cache, __rte_unused int flags)
+			struct rte_mempool_cache *cache)
 {
 	int ret;
 	ret = __mempool_generic_get(mp, obj_table, n, cache);
@@ -1286,7 +1273,7 @@ rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
 {
 	struct rte_mempool_cache *cache;
 	cache = rte_mempool_default_cache(mp, rte_lcore_id());
-	return rte_mempool_generic_get(mp, obj_table, n, cache, mp->flags);
+	return rte_mempool_generic_get(mp, obj_table, n, cache);
 }
 
 /**
diff --git a/test/test/test_mempool.c b/test/test/test_mempool.c
index 0a4423954..47dc3ac5f 100644
--- a/test/test/test_mempool.c
+++ b/test/test/test_mempool.c
@@ -129,7 +129,7 @@ test_mempool_basic(struct rte_mempool *mp, int use_external_cache)
 	rte_mempool_dump(stdout, mp);
 
 	printf("get an object\n");
-	if (rte_mempool_generic_get(mp, &obj, 1, cache, 0) < 0)
+	if (rte_mempool_generic_get(mp, &obj, 1, cache) < 0)
 		GOTO_ERR(ret, out);
 	rte_mempool_dump(stdout, mp);
 
@@ -152,21 +152,21 @@ test_mempool_basic(struct rte_mempool *mp, int use_external_cache)
 #endif
 
 	printf("put the object back\n");
-	rte_mempool_generic_put(mp, &obj, 1, cache, 0);
+	rte_mempool_generic_put(mp, &obj, 1, cache);
 	rte_mempool_dump(stdout, mp);
 
 	printf("get 2 objects\n");
-	if (rte_mempool_generic_get(mp, &obj, 1, cache, 0) < 0)
+	if (rte_mempool_generic_get(mp, &obj, 1, cache) < 0)
 		GOTO_ERR(ret, out);
-	if (rte_mempool_generic_get(mp, &obj2, 1, cache, 0) < 0) {
-		rte_mempool_generic_put(mp, &obj, 1, cache, 0);
+	if (rte_mempool_generic_get(mp, &obj2, 1, cache) < 0) {
+		rte_mempool_generic_put(mp, &obj, 1, cache);
 		GOTO_ERR(ret, out);
 	}
 	rte_mempool_dump(stdout, mp);
 
 	printf("put the objects back\n");
-	rte_mempool_generic_put(mp, &obj, 1, cache, 0);
-	rte_mempool_generic_put(mp, &obj2, 1, cache, 0);
+	rte_mempool_generic_put(mp, &obj, 1, cache);
+	rte_mempool_generic_put(mp, &obj2, 1, cache);
 	rte_mempool_dump(stdout, mp);
 
 	/*
@@ -178,7 +178,7 @@ test_mempool_basic(struct rte_mempool *mp, int use_external_cache)
 		GOTO_ERR(ret, out);
 
 	for (i = 0; i < MEMPOOL_SIZE; i++) {
-		if (rte_mempool_generic_get(mp, &objtable[i], 1, cache, 0) < 0)
+		if (rte_mempool_generic_get(mp, &objtable[i], 1, cache) < 0)
 			break;
 	}
 
@@ -200,7 +200,7 @@ test_mempool_basic(struct rte_mempool *mp, int use_external_cache)
 				ret = -1;
 		}
 
-		rte_mempool_generic_put(mp, &objtable[i], 1, cache, 0);
+		rte_mempool_generic_put(mp, &objtable[i], 1, cache);
 	}
 
 	free(objtable);
diff --git a/test/test/test_mempool_perf.c b/test/test/test_mempool_perf.c
index 07b28c066..3b8f7de7c 100644
--- a/test/test/test_mempool_perf.c
+++ b/test/test/test_mempool_perf.c
@@ -186,7 +186,7 @@ per_lcore_mempool_test(void *arg)
 				ret = rte_mempool_generic_get(mp,
 							      &obj_table[idx],
 							      n_get_bulk,
-							      cache, 0);
+							      cache);
 				if (unlikely(ret < 0)) {
 					rte_mempool_dump(stdout, mp);
 					/* in this case, objects are lost... */
@@ -200,7 +200,7 @@ per_lcore_mempool_test(void *arg)
 			while (idx < n_keep) {
 				rte_mempool_generic_put(mp, &obj_table[idx],
 							n_put_bulk,
-							cache, 0);
+							cache);
 				idx += n_put_bulk;
 			}
 		}
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 116+ messages in thread

* [PATCH v2 2/6] mempool: get the mempool capability
  2017-07-13  9:32 ` [PATCH v2 0/6] Infrastructure to support octeontx HW mempool manager Santosh Shukla
  2017-07-13  9:32   ` [PATCH v2 1/6] mempool: fix flags data type Santosh Shukla
@ 2017-07-13  9:32   ` Santosh Shukla
  2017-07-13  9:32   ` [PATCH v2 3/6] mempool: detect physical contiguous object in pool Santosh Shukla
                     ` (5 subsequent siblings)
  7 siblings, 0 replies; 116+ messages in thread
From: Santosh Shukla @ 2017-07-13  9:32 UTC (permalink / raw)
  To: thomas, dev, olivier.matz; +Cc: jerin.jacob, hemant.agrawal, Santosh Shukla

Allow a mempool to advertise its capabilities.
A handler called rte_mempool_ops_get_capabilities has been introduced.
- Upon the ->get_capabilities call, the mempool driver advertises its
capabilities by updating 'mp->flags'.

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
v1 -- v2:
- Added RTE_FUNC_PTR_OR_ERR_RET
- _get_capabilities :: returns 0 on success and <0 on error
- _get_capabilities :: driver updates mp->flags with its capability value.
- _get_capabilities :: added an appropriate comment
- Fixed _version.map :: replaced DPDK_17.05 with DPDK_16.07
Refer [1].
[1] http://dpdk.org/dev/patchwork/patch/25603/

 lib/librte_mempool/rte_mempool.c           |  5 +++++
 lib/librte_mempool/rte_mempool.h           | 20 ++++++++++++++++++++
 lib/librte_mempool/rte_mempool_ops.c       | 13 +++++++++++++
 lib/librte_mempool/rte_mempool_version.map |  7 +++++++
 4 files changed, 45 insertions(+)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 237665c65..34619aafd 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -527,6 +527,11 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 	if (mp->nb_mem_chunks != 0)
 		return -EEXIST;
 
+	/* Get mempool capability */
+	ret = rte_mempool_ops_get_capabilities(mp);
+	if (ret)
+		RTE_LOG(DEBUG, MEMPOOL, "get_capability not supported for %s\n", mp->name);
+
 	if (rte_xen_dom0_supported()) {
 		pg_sz = RTE_PGSIZE_2M;
 		pg_shift = rte_bsf32(pg_sz);
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index bd7be2319..0fa571c72 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -389,6 +389,12 @@ typedef int (*rte_mempool_dequeue_t)(struct rte_mempool *mp,
  */
 typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
 
+/**
+ * Get the mempool capability.
+ */
+typedef int (*rte_mempool_get_capabilities_t)(struct rte_mempool *mp);
+
+
 /** Structure defining mempool operations structure */
 struct rte_mempool_ops {
 	char name[RTE_MEMPOOL_OPS_NAMESIZE]; /**< Name of mempool ops struct. */
@@ -397,6 +403,7 @@ struct rte_mempool_ops {
 	rte_mempool_enqueue_t enqueue;   /**< Enqueue an object. */
 	rte_mempool_dequeue_t dequeue;   /**< Dequeue an object. */
 	rte_mempool_get_count get_count; /**< Get qty of available objs. */
+	rte_mempool_get_capabilities_t get_capabilities; /**< Get capability */
 } __rte_cache_aligned;
 
 #define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */
@@ -508,6 +515,19 @@ rte_mempool_ops_enqueue_bulk(struct rte_mempool *mp, void * const *obj_table,
 unsigned
 rte_mempool_ops_get_count(const struct rte_mempool *mp);
 
+
+/**
+ * @internal wrapper for mempool_ops get_capabilities callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @return
+ *   - 0: Success; Capability updated to mp->flags
+ *   - <0: Error; code of capability function.
+ */
+int
+rte_mempool_ops_get_capabilities(struct rte_mempool *mp);
+
 /**
  * @internal wrapper for mempool_ops free callback.
  *
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
index 5f24de250..84b2f8151 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -85,6 +85,7 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h)
 	ops->enqueue = h->enqueue;
 	ops->dequeue = h->dequeue;
 	ops->get_count = h->get_count;
+	ops->get_capabilities = h->get_capabilities;
 
 	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
 
@@ -123,6 +124,18 @@ rte_mempool_ops_get_count(const struct rte_mempool *mp)
 	return ops->get_count(mp);
 }
 
+/* wrapper to get external mempool capability. */
+int
+rte_mempool_ops_get_capabilities(struct rte_mempool *mp)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_get_ops(mp->ops_index);
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->get_capabilities, -ENOTSUP);
+	return ops->get_capabilities(mp);
+}
+
 /* sets mempool ops previously registered by rte_mempool_register_ops. */
 int
 rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index f9c079447..392388bef 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -41,3 +41,10 @@ DPDK_16.07 {
 	rte_mempool_set_ops_byname;
 
 } DPDK_2.0;
+
+DPDK_17.08 {
+	global:
+
+	rte_mempool_ops_get_capabilities;
+
+} DPDK_16.07;
-- 
2.13.0


* [PATCH v2 3/6] mempool: detect physical contiguous object in pool
  2017-07-13  9:32 ` [PATCH v2 0/6] Infrastructure to support octeontx HW mempool manager Santosh Shukla
  2017-07-13  9:32   ` [PATCH v2 1/6] mempool: fix flags data type Santosh Shukla
  2017-07-13  9:32   ` [PATCH v2 2/6] mempool: get the mempool capability Santosh Shukla
@ 2017-07-13  9:32   ` Santosh Shukla
  2017-07-13  9:32   ` [PATCH v2 4/6] mempool: add mempool arg in xmem size and usage Santosh Shukla
                     ` (4 subsequent siblings)
  7 siblings, 0 replies; 116+ messages in thread
From: Santosh Shukla @ 2017-07-13  9:32 UTC (permalink / raw)
  To: thomas, dev, olivier.matz; +Cc: jerin.jacob, hemant.agrawal, Santosh Shukla

The memory area containing all the objects must be physically
contiguous.
Introducing the MEMPOOL_F_CAPA_PHYS_CONTIG flag for such use cases.

The flag is used to detect whether the pool area has sufficient space
to fit all objects; if not, -ENOSPC is returned.
This way, we make sure that all objects within a pool are contiguous.

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
v1 -- v2:
- Renamed flag to MEMPOOL_F_CAPA_PHYS_CONTIG
- Comment reworded.
Refer [1].
[1] http://dpdk.org/dev/patchwork/patch/25604/

 lib/librte_mempool/rte_mempool.c | 8 ++++++++
 lib/librte_mempool/rte_mempool.h | 1 +
 2 files changed, 9 insertions(+)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 34619aafd..65a98c046 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -368,6 +368,14 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
 
 	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
 
+	/* Detect pool area has sufficient space for elements */
+	if (mp->flags & MEMPOOL_F_CAPA_PHYS_CONTIG) {
+		if (len < total_elt_sz * mp->size) {
+			RTE_LOG(ERR, MEMPOOL, "pool area %" PRIx64 " not enough\n", len);
+			return -ENOSPC;
+		}
+	}
+
 	memhdr = rte_zmalloc("MEMPOOL_MEMHDR", sizeof(*memhdr), 0);
 	if (memhdr == NULL)
 		return -ENOMEM;
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 0fa571c72..ca5634eaf 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -265,6 +265,7 @@ struct rte_mempool {
 #define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
 #define MEMPOOL_F_POOL_CREATED   0x0010 /**< Internal: pool is created. */
 #define MEMPOOL_F_NO_PHYS_CONTIG 0x0020 /**< Don't need physically contiguous objs. */
+#define MEMPOOL_F_CAPA_PHYS_CONTIG 0x0040 /**< Detect physcially contiguous objs */
 
 /**
  * @internal When debug is enabled, store some statistics.
-- 
2.13.0


* [PATCH v2 4/6] mempool: add mempool arg in xmem size and usage
  2017-07-13  9:32 ` [PATCH v2 0/6] Infrastructure to support octeontx HW mempool manager Santosh Shukla
                     ` (2 preceding siblings ...)
  2017-07-13  9:32   ` [PATCH v2 3/6] mempool: detect physical contiguous object in pool Santosh Shukla
@ 2017-07-13  9:32   ` Santosh Shukla
  2017-07-13  9:32   ` [PATCH v2 5/6] mempool: introduce block size align flag Santosh Shukla
                     ` (3 subsequent siblings)
  7 siblings, 0 replies; 116+ messages in thread
From: Santosh Shukla @ 2017-07-13  9:32 UTC (permalink / raw)
  To: thomas, dev, olivier.matz; +Cc: jerin.jacob, hemant.agrawal, Santosh Shukla

xmem_size and xmem_usage need to know the status of mp->flags.
A following patch will make use of that.

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
---
v1 -- v2:
- added new mempool param in xmem_size/usage, Per deprecation notice [1]
 and discussion based on thread [2]
[1] http://dpdk.org/dev/patchwork/patch/26872/
[2] http://dpdk.org/dev/patchwork/patch/25605/

 drivers/net/xenvirt/rte_mempool_gntalloc.c |  5 +++--
 lib/librte_mempool/rte_mempool.c           | 10 ++++++----
 lib/librte_mempool/rte_mempool.h           |  8 ++++++--
 test/test/test_mempool.c                   |  4 ++--
 4 files changed, 17 insertions(+), 10 deletions(-)

diff --git a/drivers/net/xenvirt/rte_mempool_gntalloc.c b/drivers/net/xenvirt/rte_mempool_gntalloc.c
index 73e82f808..ee0bda459 100644
--- a/drivers/net/xenvirt/rte_mempool_gntalloc.c
+++ b/drivers/net/xenvirt/rte_mempool_gntalloc.c
@@ -114,7 +114,7 @@ _create_mempool(const char *name, unsigned elt_num, unsigned elt_size,
 	pg_shift = rte_bsf32(pg_sz);
 
 	rte_mempool_calc_obj_size(elt_size, flags, &objsz);
-	sz = rte_mempool_xmem_size(elt_num, objsz.total_size, pg_shift);
+	sz = rte_mempool_xmem_size(elt_num, objsz.total_size, pg_shift, NULL);
 	pg_num = sz >> pg_shift;
 
 	pa_arr = calloc(pg_num, sizeof(pa_arr[0]));
@@ -162,7 +162,8 @@ _create_mempool(const char *name, unsigned elt_num, unsigned elt_size,
 	 * Check that allocated size is big enough to hold elt_num
 	 * objects and a calcualte how many bytes are actually required.
 	 */
-	usz = rte_mempool_xmem_usage(va, elt_num, objsz.total_size, pa_arr, pg_num, pg_shift);
+	usz = rte_mempool_xmem_usage(va, elt_num, objsz.total_size, pa_arr,
+				     pg_num, pg_shift, NULL);
 	if (usz < 0) {
 		mp = NULL;
 		i = pg_num;
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 65a98c046..a6975aeda 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -238,7 +238,8 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
  * Calculate maximum amount of memory required to store given number of objects.
  */
 size_t
-rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift)
+rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
+		      __rte_unused const struct rte_mempool *mp)
 {
 	size_t obj_per_page, pg_num, pg_sz;
 
@@ -264,13 +265,14 @@ rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift)
 ssize_t
 rte_mempool_xmem_usage(__rte_unused void *vaddr, uint32_t elt_num,
 	size_t total_elt_sz, const phys_addr_t paddr[], uint32_t pg_num,
-	uint32_t pg_shift)
+	uint32_t pg_shift, __rte_unused const struct rte_mempool *mp)
 {
 	uint32_t elt_cnt = 0;
 	phys_addr_t start, end;
 	uint32_t paddr_idx;
 	size_t pg_sz = (size_t)1 << pg_shift;
 
+
 	/* if paddr is NULL, assume contiguous memory */
 	if (paddr == NULL) {
 		start = 0;
@@ -556,7 +558,7 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 
 	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
 	for (mz_id = 0, n = mp->size; n > 0; mz_id++, n -= ret) {
-		size = rte_mempool_xmem_size(n, total_elt_sz, pg_shift);
+		size = rte_mempool_xmem_size(n, total_elt_sz, pg_shift, mp);
 
 		ret = snprintf(mz_name, sizeof(mz_name),
 			RTE_MEMPOOL_MZ_FORMAT "_%d", mp->name, mz_id);
@@ -613,7 +615,7 @@ get_anon_size(const struct rte_mempool *mp)
 	pg_sz = getpagesize();
 	pg_shift = rte_bsf32(pg_sz);
 	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
-	size = rte_mempool_xmem_size(mp->size, total_elt_sz, pg_shift);
+	size = rte_mempool_xmem_size(mp->size, total_elt_sz, pg_shift, mp);
 
 	return size;
 }
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index ca5634eaf..a4bfdb56e 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -1497,11 +1497,13 @@ uint32_t rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
  *   by rte_mempool_calc_obj_size().
  * @param pg_shift
  *   LOG2 of the physical pages size. If set to 0, ignore page boundaries.
+ * @param mp
+ *  A pointer to the mempool structure.
  * @return
  *   Required memory size aligned at page boundary.
  */
 size_t rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz,
-	uint32_t pg_shift);
+	uint32_t pg_shift, const struct rte_mempool *mp);
 
 /**
  * Get the size of memory required to store mempool elements.
@@ -1524,6 +1526,8 @@ size_t rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz,
  *   Number of elements in the paddr array.
  * @param pg_shift
  *   LOG2 of the physical pages size.
+ * @param mp
+ *  A pointer to the mempool structure.
  * @return
  *   On success, the number of bytes needed to store given number of
  *   objects, aligned to the given page size. If the provided memory
@@ -1532,7 +1536,7 @@ size_t rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz,
  */
 ssize_t rte_mempool_xmem_usage(void *vaddr, uint32_t elt_num,
 	size_t total_elt_sz, const phys_addr_t paddr[], uint32_t pg_num,
-	uint32_t pg_shift);
+	uint32_t pg_shift, const struct rte_mempool *mp);
 
 /**
  * Walk list of all memory pools
diff --git a/test/test/test_mempool.c b/test/test/test_mempool.c
index 47dc3ac5f..1eb81081c 100644
--- a/test/test/test_mempool.c
+++ b/test/test/test_mempool.c
@@ -485,10 +485,10 @@ test_mempool_xmem_misc(void)
 
 	elt_num = MAX_KEEP;
 	total_size = rte_mempool_calc_obj_size(MEMPOOL_ELT_SIZE, 0, NULL);
-	sz = rte_mempool_xmem_size(elt_num, total_size, MEMPOOL_PG_SHIFT_MAX);
+	sz = rte_mempool_xmem_size(elt_num, total_size, MEMPOOL_PG_SHIFT_MAX, NULL);
 
 	usz = rte_mempool_xmem_usage(NULL, elt_num, total_size, 0, 1,
-		MEMPOOL_PG_SHIFT_MAX);
+		MEMPOOL_PG_SHIFT_MAX, NULL);
 
 	if (sz != (size_t)usz)  {
 		printf("failure @ %s: rte_mempool_xmem_usage(%u, %u) "
-- 
2.13.0


* [PATCH v2 5/6] mempool: introduce block size align flag
  2017-07-13  9:32 ` [PATCH v2 0/6] Infrastructure to support octeontx HW mempool manager Santosh Shukla
                     ` (3 preceding siblings ...)
  2017-07-13  9:32   ` [PATCH v2 4/6] mempool: add mempool arg in xmem size and usage Santosh Shukla
@ 2017-07-13  9:32   ` Santosh Shukla
  2017-07-13  9:32   ` [PATCH v2 6/6] mempool: update range info to pool Santosh Shukla
                     ` (2 subsequent siblings)
  7 siblings, 0 replies; 116+ messages in thread
From: Santosh Shukla @ 2017-07-13  9:32 UTC (permalink / raw)
  To: thomas, dev, olivier.matz; +Cc: jerin.jacob, hemant.agrawal, Santosh Shukla

Some mempool HW, like the octeontx/fpa block, demands a block size
(total_elem_sz) aligned object start address.

Introducing the MEMPOOL_F_POOL_BLK_SZ_ALIGNED flag.
If this flag is set:
- Align the object start address to a multiple of total_elt_sz.
- Allocate one additional object. The additional object is needed to make
  sure that the requested 'n' objects get correctly populated. Example:

- Let's say we get a memory chunk of size 'x' from the memzone.
- And the application has requested 'n' objects from the mempool.
- Ideally, we start placing objects from start address 0 to (x - block_sz)
  for the n objects.
- The first object address (i.e. 0) is not necessarily aligned to block_sz.
- So we derive an 'offset' value 'off' for block_sz alignment purposes.
- That 'off' makes sure that the start address of each object is blk_sz
  aligned.
- Applying 'off' may sacrifice the first block_sz area of the memzone
  area x. So the total number of objects which can fit in the pool area
  is n-1, which is incorrect behavior.

Therefore we request one additional object (block_sz area) from the memzone
when the F_BLK_SZ_ALIGNED flag is set.

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
v1 -- v2:
- patch description changed.
- Removed else-if bracket mix
- removed sanity check for alignment
- removed extra var delta
- Removed __rte_unused from xmem_usage/size and added _BLK_SZ_ALIGN check.
Refer v1 review comment [1].
[1] http://dpdk.org/dev/patchwork/patch/25605/

 lib/librte_mempool/rte_mempool.c | 16 +++++++++++++---
 lib/librte_mempool/rte_mempool.h |  1 +
 2 files changed, 14 insertions(+), 3 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index a6975aeda..4ae2bde53 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -239,10 +239,14 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
  */
 size_t
 rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
-		      __rte_unused const struct rte_mempool *mp)
+		      const struct rte_mempool *mp)
 {
 	size_t obj_per_page, pg_num, pg_sz;
 
+	if (mp && mp->flags & MEMPOOL_F_POOL_BLK_SZ_ALIGNED)
+		/* alignment need one additional object */
+		elt_num += 1;
+
 	if (total_elt_sz == 0)
 		return 0;
 
@@ -265,13 +269,16 @@ rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
 ssize_t
 rte_mempool_xmem_usage(__rte_unused void *vaddr, uint32_t elt_num,
 	size_t total_elt_sz, const phys_addr_t paddr[], uint32_t pg_num,
-	uint32_t pg_shift, __rte_unused const struct rte_mempool *mp)
+	uint32_t pg_shift, const struct rte_mempool *mp)
 {
 	uint32_t elt_cnt = 0;
 	phys_addr_t start, end;
 	uint32_t paddr_idx;
 	size_t pg_sz = (size_t)1 << pg_shift;
 
+	if (mp && mp->flags & MEMPOOL_F_POOL_BLK_SZ_ALIGNED)
+		/* alignment need one additional object */
+		elt_num += 1;
 
 	/* if paddr is NULL, assume contiguous memory */
 	if (paddr == NULL) {
@@ -389,7 +396,10 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
 	memhdr->free_cb = free_cb;
 	memhdr->opaque = opaque;
 
-	if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
+	if (mp->flags & MEMPOOL_F_POOL_BLK_SZ_ALIGNED)
+		/* align object start address to a multiple of total_elt_sz */
+		off = total_elt_sz - ((uintptr_t)vaddr % total_elt_sz);
+	else if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
 		off = RTE_PTR_ALIGN_CEIL(vaddr, 8) - vaddr;
 	else
 		off = RTE_PTR_ALIGN_CEIL(vaddr, RTE_CACHE_LINE_SIZE) - vaddr;
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index a4bfdb56e..d7c2416f4 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -266,6 +266,7 @@ struct rte_mempool {
 #define MEMPOOL_F_POOL_CREATED   0x0010 /**< Internal: pool is created. */
 #define MEMPOOL_F_NO_PHYS_CONTIG 0x0020 /**< Don't need physically contiguous objs. */
 #define MEMPOOL_F_CAPA_PHYS_CONTIG 0x0040 /**< Detect physcially contiguous objs */
+#define MEMPOOL_F_POOL_BLK_SZ_ALIGNED 0x0080 /**< Align obj start address to total elem size */
 
 /**
  * @internal When debug is enabled, store some statistics.
-- 
2.13.0


* [PATCH v2 6/6] mempool: update range info to pool
  2017-07-13  9:32 ` [PATCH v2 0/6] Infrastructure to support octeontx HW mempool manager Santosh Shukla
                     ` (4 preceding siblings ...)
  2017-07-13  9:32   ` [PATCH v2 5/6] mempool: introduce block size align flag Santosh Shukla
@ 2017-07-13  9:32   ` Santosh Shukla
  2017-07-18  6:07   ` [PATCH v2 0/6] Infrastructure to support octeontx HW mempool manager santosh
  2017-07-20 13:47   ` [PATCH v3 " Santosh Shukla
  7 siblings, 0 replies; 116+ messages in thread
From: Santosh Shukla @ 2017-07-13  9:32 UTC (permalink / raw)
  To: thomas, dev, olivier.matz; +Cc: jerin.jacob, hemant.agrawal, Santosh Shukla

An HW pool manager, e.g. the Octeontx SoC, demands that s/w program the
start and end address of the pool. Currently, there is no such handle in
the external mempool.
Introducing the rte_mempool_update_range handle, which lets the HW pool
manager know which hugepages the common layer selects:
for each hugepage, its start/end address is updated to the HW pool manager.

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
v1 -- v2:
- Added RTE_FUNC_PTR_OR_RET

 lib/librte_mempool/rte_mempool.c           |  3 +++
 lib/librte_mempool/rte_mempool.h           | 22 ++++++++++++++++++++++
 lib/librte_mempool/rte_mempool_ops.c       | 13 +++++++++++++
 lib/librte_mempool/rte_mempool_version.map |  1 +
 4 files changed, 39 insertions(+)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 4ae2bde53..ea32b65a9 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -363,6 +363,9 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
 	struct rte_mempool_memhdr *memhdr;
 	int ret;
 
+	/* update range info to mempool */
+	rte_mempool_ops_update_range(mp, vaddr, paddr, len);
+
 	/* create the internal ring if not already done */
 	if ((mp->flags & MEMPOOL_F_POOL_CREATED) == 0) {
 		ret = rte_mempool_ops_alloc(mp);
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index d7c2416f4..b59a522cd 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -396,6 +396,11 @@ typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
  */
 typedef int (*rte_mempool_get_capabilities_t)(struct rte_mempool *mp);
 
+/**
+ * Update range info to mempool.
+ */
+typedef void (*rte_mempool_update_range_t)(const struct rte_mempool *mp,
+		char *vaddr, phys_addr_t paddr, size_t len);
 
 /** Structure defining mempool operations structure */
 struct rte_mempool_ops {
@@ -406,6 +411,7 @@ struct rte_mempool_ops {
 	rte_mempool_dequeue_t dequeue;   /**< Dequeue an object. */
 	rte_mempool_get_count get_count; /**< Get qty of available objs. */
 	rte_mempool_get_capabilities_t get_capabilities; /**< Get capability */
+	rte_mempool_update_range_t update_range; /**< Update range to mempool */
 } __rte_cache_aligned;
 
 #define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */
@@ -531,6 +537,22 @@ int
 rte_mempool_ops_get_capabilities(struct rte_mempool *mp);
 
 /**
+ * @internal wrapper for mempool_ops update_range callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param vaddr
+ *   Pointer to the buffer virtual address
+ * @param paddr
+ *   Pointer to the buffer physical address
+ * @param len
+ *   Pool size
+ */
+void
+rte_mempool_ops_update_range(const struct rte_mempool *mp,
+				char *vaddr, phys_addr_t paddr, size_t len);
+
+/**
  * @internal wrapper for mempool_ops free callback.
  *
  * @param mp
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
index 84b2f8151..c9df351f3 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -86,6 +86,7 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h)
 	ops->dequeue = h->dequeue;
 	ops->get_count = h->get_count;
 	ops->get_capabilities = h->get_capabilities;
+	ops->update_range = h->update_range;
 
 	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
 
@@ -136,6 +137,18 @@ rte_mempool_ops_get_capabilities(struct rte_mempool *mp)
 	return ops->get_capabilities(mp);
 }
 
+/* wrapper to update range info to external mempool */
+void
+rte_mempool_ops_update_range(const struct rte_mempool *mp, char *vaddr,
+			     phys_addr_t paddr, size_t len)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_get_ops(mp->ops_index);
+	RTE_FUNC_PTR_OR_RET(ops->update_range);
+	ops->update_range(mp, vaddr, paddr, len);
+}
+
 /* sets mempool ops previously registered by rte_mempool_register_ops. */
 int
 rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index 392388bef..c0ed5a94f 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -46,5 +46,6 @@ DPDK_17.08 {
 	global:
 
 	rte_mempool_ops_get_capabilities;
+	rte_mempool_ops_update_range;
 
 } DPDK_16.07;
-- 
2.13.0


* Re: [PATCH v2 0/6] Infrastructure to support octeontx HW mempool manager
  2017-07-13  9:32 ` [PATCH v2 0/6] Infrastructure to support octeontx HW mempool manager Santosh Shukla
                     ` (5 preceding siblings ...)
  2017-07-13  9:32   ` [PATCH v2 6/6] mempool: update range info to pool Santosh Shukla
@ 2017-07-18  6:07   ` santosh
  2017-07-20 13:47   ` [PATCH v3 " Santosh Shukla
  7 siblings, 0 replies; 116+ messages in thread
From: santosh @ 2017-07-18  6:07 UTC (permalink / raw)
  To: thomas, dev, olivier.matz; +Cc: jerin.jacob, hemant.agrawal

On Thursday 13 July 2017 03:02 PM, Santosh Shukla wrote:

> v2:
> (Note: v2 work is based on deprecation notice [1], It's for 17.11)
>
> In order to support the octeontx HW mempool manager, the common mempool layer must
> meet the conditions below.
> - Object start address should be block size (total elem size) aligned.
> - Objects must have physically contiguous addresses within the pool.
>
> And right now the mempool supports neither.
>
> The patchset adds infrastructure to support both conditions in a _generic_ way.
> The proposed solution won't affect existing mempool drivers or their functionality.
>
> Summary:
> Introducing capability flag. Now mempool drivers can advertise their
> capabilities to common mempool layer(at the pool creation time).
> Handlers are introduced in order to support capability flag.
>
> Flags:
> * MEMPOOL_F_CAPA_PHYS_CONTIG - If the flag is set, detect whether the objects
> have physically contiguous addresses within a hugepage.
>
> * MEMPOOL_F_POOL_BLK_SZ_ALIGNED - If the flag is set, make sure that object
> addresses are block size aligned.
>
> API:
> Two handles are introduced:
> * rte_mempool_ops_get_capabilities - advertise mempool manager capability.
> * rte_mempool_ops_update_range - Update start and end address range to
> HW mempool manager.
>
> v2 --> v1 :
> * [01/06] Per deprecation notice [1], Changed rte_mempool 'flag'
>   data type from int to unsigned int and removed flag param
>   from _xmem_size/usage api.
> * [02/06] Incorporated review feedback from v1 [2] (Suggested by Olivier)
> * [03/06] Renamed flag to MEMPOOL_F_CAPA_PHYS_CONTIG
>   and comment reworded. (Suggested by Olivier per v1 [3])
> * [04/06] added new mempool arg in xmem_size/usage. (Suggested by Olivier)
> * [05/06] patch description changed.
>         - Removed else-if bracket mix
>         - removed sanity check for alignment
>         - removed extra var delta
>         - Removed __rte_unused from xmem_usage/size and added _BLK_SZ_ALIGN check.
>         (Suggested by Olivier per v1[4])
> * [06/06] Added RTE_FUNC_PTR_OR_RET in rte_mempool_ops_update_ops.
>
> Checkpatch status:
> * WARNING: line over 80 characters
> Noticed for debug messages.
>
> Work history:
> Refer [5].
>
> Thanks.
>
> [1] deprecation notice: http://dpdk.org/dev/patchwork/patch/26872/
> [2] v1: http://dpdk.org/dev/patchwork/patch/25603/
> [3] v1: http://dpdk.org/dev/patchwork/patch/25604/
> [4] v1: http://dpdk.org/dev/patchwork/patch/25605/
> [5] v1: http://dev.dpdk.narkive.com/Qcu55Lgz/dpdk-dev-patch-0-4-infrastructure-to-support-octeontx-hw-mempool-manager
>
Ping?


* [PATCH v3 0/6] Infrastructure to support octeontx HW mempool manager
  2017-07-13  9:32 ` [PATCH v2 0/6] Infrastructure to support octeontx HW mempool manager Santosh Shukla
                     ` (6 preceding siblings ...)
  2017-07-18  6:07   ` [PATCH v2 0/6] Infrastructure to support octeontx HW mempool manager santosh
@ 2017-07-20 13:47   ` Santosh Shukla
  2017-07-20 13:47     ` [PATCH v3 1/6] mempool: fix flags data type Santosh Shukla
                       ` (6 more replies)
  7 siblings, 7 replies; 116+ messages in thread
From: Santosh Shukla @ 2017-07-20 13:47 UTC (permalink / raw)
  To: thomas, dev, olivier.matz; +Cc: jerin.jacob, hemant.agrawal, Santosh Shukla

v3:
(Note: v3 work is based on deprecation notice [1], It's for 17.11)
* Changed _version.map from 17.08 to 17.11.
* build fixes reported by stv_sys.
* Patchset rebased on upstream commit: da94a999. 

v2:

In order to support the octeontx HW mempool manager, the common mempool layer must
meet the conditions below.
- Object start address should be block size (total elem size) aligned.
- Objects must have physically contiguous addresses within the pool.

And right now the mempool supports neither.

The patchset adds infrastructure to support both conditions in a _generic_ way.
The proposed solution won't affect existing mempool drivers or their functionality.

Summary:
Introducing capability flag. Now mempool drivers can advertise their
capabilities to common mempool layer(at the pool creation time).
Handlers are introduced in order to support capability flag.

Flags:
* MEMPOOL_F_CAPA_PHYS_CONTIG - If the flag is set, detect whether the objects
have physically contiguous addresses within a hugepage.

* MEMPOOL_F_POOL_BLK_SZ_ALIGNED - If the flag is set, make sure that object
addresses are block size aligned.

API:
Two handles are introduced:
* rte_mempool_ops_get_capabilities - advertise mempool manager capability.
* rte_mempool_ops_update_range - Update start and end address range to
HW mempool manager.

v2 --> v1 :
* [01/06] Per deprecation notice [1], Changed rte_mempool 'flag'
  data type from int to unsigned int and removed flag param
  from _xmem_size/usage api.
* [02/06] Incorporated review feedback from v1 [2] (Suggested by Olivier)
* [03/06] Renamed flag to MEMPOOL_F_CAPA_PHYS_CONTIG
  and comment reworded. (Suggested by Olivier per v1 [3])
* [04/06] added new mempool arg in xmem_size/usage. (Suggested by Olivier)
* [05/06] patch description changed.
        - Removed else-if bracket mix
        - removed sanity check for alignment
        - removed extra var delta
        - Removed __rte_unused from xmem_usage/size and added _BLK_SZ_ALIGN check.
        (Suggeted by Olivier per v1[4])
* [06/06] Added RTE_FUNC_PTR_OR_RET in rte_mempool_ops_update_ops.

Checkpatch status:
* WARNING: line over 80 characters
Noticed for debug messages.

Work history:
For v1, refer [5].

Thanks.

[1] deprecation notice v2: http://dpdk.org/dev/patchwork/patch/27079/
[2] v1: http://dpdk.org/dev/patchwork/patch/25603/
[3] v1: http://dpdk.org/dev/patchwork/patch/25604/
[4] v1: http://dpdk.org/dev/patchwork/patch/25605/
[5] v1: http://dev.dpdk.narkive.com/Qcu55Lgz/dpdk-dev-patch-0-4-infrastructure-to-support-octeontx-hw-mempool-manager

Santosh Shukla (6):
  mempool: fix flags data type
  mempool: get the mempool capability
  mempool: detect physical contiguous object in pool
  mempool: add mempool arg in xmem size and usage
  mempool: introduce block size align flag
  mempool: update range info to pool

 drivers/net/xenvirt/rte_mempool_gntalloc.c |  5 +-
 lib/librte_mempool/rte_mempool.c           | 42 ++++++++++++++---
 lib/librte_mempool/rte_mempool.h           | 75 ++++++++++++++++++++++--------
 lib/librte_mempool/rte_mempool_ops.c       | 27 +++++++++++
 lib/librte_mempool/rte_mempool_version.map |  8 ++++
 test/test/test_mempool.c                   | 22 ++++-----
 test/test/test_mempool_perf.c              |  4 +-
 7 files changed, 141 insertions(+), 42 deletions(-)

-- 
2.11.0

^ permalink raw reply	[flat|nested] 116+ messages in thread

* [PATCH v3 1/6] mempool: fix flags data type
  2017-07-20 13:47   ` [PATCH v3 " Santosh Shukla
@ 2017-07-20 13:47     ` Santosh Shukla
  2017-07-20 13:47     ` [PATCH v3 2/6] mempool: get the mempool capability Santosh Shukla
                       ` (5 subsequent siblings)
  6 siblings, 0 replies; 116+ messages in thread
From: Santosh Shukla @ 2017-07-20 13:47 UTC (permalink / raw)
  To: thomas, dev, olivier.matz
  Cc: jerin.jacob, hemant.agrawal, Santosh Shukla, Wenfeng Liu,
	Lazaros Koromilas

mp->flags is an int, but the mempool API stores unsigned int
values in 'flags', so fix the 'flags' data type.

Patch also does mp->flags cleanup like:
* Remove redundant 'flags' API description from
  - __rte_mempool_generic_put
  - __rte_mempool_generic_get

* Remove unused 'flags' param from
  - rte_mempool_generic_put
  - rte_mempool_generic_get

* Fix mempool variable data types in mempool.c
  - mz_flags is an int; change it to unsigned int.

Fixes: af75078fec ("first public release")
Fixes: 454a0a7009 ("mempool: use cache in single producer or consumer mode")
Fixes: d6f78df6fe ("mempool: use bit flags for multi consumers and producers")
Fixes: d1d914ebbc ("mempool: allocate in several memory chunks by default")

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
---
Cc: Wenfeng Liu <liuwf@arraynetworks.com.cn>
Cc: Lazaros Koromilas <l@nofutznetworks.com>
Cc: Olivier Matz <olivier.matz@6wind.com>

v3:
- Changes are based on per deprecation notice [1]
[1] http://dpdk.org/dev/patchwork/patch/27079/

 lib/librte_mempool/rte_mempool.c |  4 ++--
 lib/librte_mempool/rte_mempool.h | 23 +++++------------------
 test/test/test_mempool.c         | 18 +++++++++---------
 test/test/test_mempool_perf.c    |  4 ++--
 4 files changed, 18 insertions(+), 31 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 6fc3c9c7c..237665c65 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -515,7 +515,7 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
 int
 rte_mempool_populate_default(struct rte_mempool *mp)
 {
-	int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
+	unsigned int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
 	char mz_name[RTE_MEMZONE_NAMESIZE];
 	const struct rte_memzone *mz;
 	size_t size, total_elt_sz, align, pg_sz, pg_shift;
@@ -742,7 +742,7 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
 	struct rte_tailq_entry *te = NULL;
 	const struct rte_memzone *mz = NULL;
 	size_t mempool_size;
-	int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
+	unsigned int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
 	struct rte_mempool_objsz objsz;
 	unsigned lcore_id;
 	int ret;
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 76b5b3b15..bd7be2319 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -226,7 +226,7 @@ struct rte_mempool {
 	};
 	void *pool_config;               /**< optional args for ops alloc. */
 	const struct rte_memzone *mz;    /**< Memzone where pool is alloc'd. */
-	int flags;                       /**< Flags of the mempool. */
+	unsigned int flags;              /**< Flags of the mempool. */
 	int socket_id;                   /**< Socket id passed at create. */
 	uint32_t size;                   /**< Max size of the mempool. */
 	uint32_t cache_size;
@@ -1034,9 +1034,6 @@ rte_mempool_default_cache(struct rte_mempool *mp, unsigned lcore_id)
  *   positive.
  * @param cache
  *   A pointer to a mempool cache structure. May be NULL if not needed.
- * @param flags
- *   The flags used for the mempool creation.
- *   Single-producer (MEMPOOL_F_SP_PUT flag) or multi-producers.
  */
 static __rte_always_inline void
 __mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
@@ -1096,14 +1093,10 @@ __mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
  *   The number of objects to add in the mempool from the obj_table.
  * @param cache
  *   A pointer to a mempool cache structure. May be NULL if not needed.
- * @param flags
- *   The flags used for the mempool creation.
- *   Single-producer (MEMPOOL_F_SP_PUT flag) or multi-producers.
  */
 static __rte_always_inline void
 rte_mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
-			unsigned n, struct rte_mempool_cache *cache,
-			__rte_unused int flags)
+			unsigned n, struct rte_mempool_cache *cache)
 {
 	__mempool_check_cookies(mp, obj_table, n, 0);
 	__mempool_generic_put(mp, obj_table, n, cache);
@@ -1129,7 +1122,7 @@ rte_mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
 {
 	struct rte_mempool_cache *cache;
 	cache = rte_mempool_default_cache(mp, rte_lcore_id());
-	rte_mempool_generic_put(mp, obj_table, n, cache, mp->flags);
+	rte_mempool_generic_put(mp, obj_table, n, cache);
 }
 
 /**
@@ -1160,9 +1153,6 @@ rte_mempool_put(struct rte_mempool *mp, void *obj)
  *   The number of objects to get, must be strictly positive.
  * @param cache
  *   A pointer to a mempool cache structure. May be NULL if not needed.
- * @param flags
- *   The flags used for the mempool creation.
- *   Single-consumer (MEMPOOL_F_SC_GET flag) or multi-consumers.
  * @return
  *   - >=0: Success; number of objects supplied.
  *   - <0: Error; code of ring dequeue function.
@@ -1241,16 +1231,13 @@ __mempool_generic_get(struct rte_mempool *mp, void **obj_table,
  *   The number of objects to get from mempool to obj_table.
  * @param cache
  *   A pointer to a mempool cache structure. May be NULL if not needed.
- * @param flags
- *   The flags used for the mempool creation.
- *   Single-consumer (MEMPOOL_F_SC_GET flag) or multi-consumers.
  * @return
  *   - 0: Success; objects taken.
  *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.
  */
 static __rte_always_inline int
 rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table, unsigned n,
-			struct rte_mempool_cache *cache, __rte_unused int flags)
+			struct rte_mempool_cache *cache)
 {
 	int ret;
 	ret = __mempool_generic_get(mp, obj_table, n, cache);
@@ -1286,7 +1273,7 @@ rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
 {
 	struct rte_mempool_cache *cache;
 	cache = rte_mempool_default_cache(mp, rte_lcore_id());
-	return rte_mempool_generic_get(mp, obj_table, n, cache, mp->flags);
+	return rte_mempool_generic_get(mp, obj_table, n, cache);
 }
 
 /**
diff --git a/test/test/test_mempool.c b/test/test/test_mempool.c
index 0a4423954..47dc3ac5f 100644
--- a/test/test/test_mempool.c
+++ b/test/test/test_mempool.c
@@ -129,7 +129,7 @@ test_mempool_basic(struct rte_mempool *mp, int use_external_cache)
 	rte_mempool_dump(stdout, mp);
 
 	printf("get an object\n");
-	if (rte_mempool_generic_get(mp, &obj, 1, cache, 0) < 0)
+	if (rte_mempool_generic_get(mp, &obj, 1, cache) < 0)
 		GOTO_ERR(ret, out);
 	rte_mempool_dump(stdout, mp);
 
@@ -152,21 +152,21 @@ test_mempool_basic(struct rte_mempool *mp, int use_external_cache)
 #endif
 
 	printf("put the object back\n");
-	rte_mempool_generic_put(mp, &obj, 1, cache, 0);
+	rte_mempool_generic_put(mp, &obj, 1, cache);
 	rte_mempool_dump(stdout, mp);
 
 	printf("get 2 objects\n");
-	if (rte_mempool_generic_get(mp, &obj, 1, cache, 0) < 0)
+	if (rte_mempool_generic_get(mp, &obj, 1, cache) < 0)
 		GOTO_ERR(ret, out);
-	if (rte_mempool_generic_get(mp, &obj2, 1, cache, 0) < 0) {
-		rte_mempool_generic_put(mp, &obj, 1, cache, 0);
+	if (rte_mempool_generic_get(mp, &obj2, 1, cache) < 0) {
+		rte_mempool_generic_put(mp, &obj, 1, cache);
 		GOTO_ERR(ret, out);
 	}
 	rte_mempool_dump(stdout, mp);
 
 	printf("put the objects back\n");
-	rte_mempool_generic_put(mp, &obj, 1, cache, 0);
-	rte_mempool_generic_put(mp, &obj2, 1, cache, 0);
+	rte_mempool_generic_put(mp, &obj, 1, cache);
+	rte_mempool_generic_put(mp, &obj2, 1, cache);
 	rte_mempool_dump(stdout, mp);
 
 	/*
@@ -178,7 +178,7 @@ test_mempool_basic(struct rte_mempool *mp, int use_external_cache)
 		GOTO_ERR(ret, out);
 
 	for (i = 0; i < MEMPOOL_SIZE; i++) {
-		if (rte_mempool_generic_get(mp, &objtable[i], 1, cache, 0) < 0)
+		if (rte_mempool_generic_get(mp, &objtable[i], 1, cache) < 0)
 			break;
 	}
 
@@ -200,7 +200,7 @@ test_mempool_basic(struct rte_mempool *mp, int use_external_cache)
 				ret = -1;
 		}
 
-		rte_mempool_generic_put(mp, &objtable[i], 1, cache, 0);
+		rte_mempool_generic_put(mp, &objtable[i], 1, cache);
 	}
 
 	free(objtable);
diff --git a/test/test/test_mempool_perf.c b/test/test/test_mempool_perf.c
index 07b28c066..3b8f7de7c 100644
--- a/test/test/test_mempool_perf.c
+++ b/test/test/test_mempool_perf.c
@@ -186,7 +186,7 @@ per_lcore_mempool_test(void *arg)
 				ret = rte_mempool_generic_get(mp,
 							      &obj_table[idx],
 							      n_get_bulk,
-							      cache, 0);
+							      cache);
 				if (unlikely(ret < 0)) {
 					rte_mempool_dump(stdout, mp);
 					/* in this case, objects are lost... */
@@ -200,7 +200,7 @@ per_lcore_mempool_test(void *arg)
 			while (idx < n_keep) {
 				rte_mempool_generic_put(mp, &obj_table[idx],
 							n_put_bulk,
-							cache, 0);
+							cache);
 				idx += n_put_bulk;
 			}
 		}
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 116+ messages in thread

* [PATCH v3 2/6] mempool: get the mempool capability
  2017-07-20 13:47   ` [PATCH v3 " Santosh Shukla
  2017-07-20 13:47     ` [PATCH v3 1/6] mempool: fix flags data type Santosh Shukla
@ 2017-07-20 13:47     ` Santosh Shukla
  2017-07-20 13:47     ` [PATCH v3 3/6] mempool: detect physical contiguous object in pool Santosh Shukla
                       ` (4 subsequent siblings)
  6 siblings, 0 replies; 116+ messages in thread
From: Santosh Shukla @ 2017-07-20 13:47 UTC (permalink / raw)
  To: thomas, dev, olivier.matz; +Cc: jerin.jacob, hemant.agrawal, Santosh Shukla

Allow the mempool driver to advertise its capabilities.
A new handler called rte_mempool_ops_get_capabilities is introduced.
- Upon a ->get_capabilities call, the mempool driver advertises its
capabilities by updating 'mp->flags'.

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
v2 -- v3:
- Changed version from _17.08 to _17.11.

v1 -- v2:
- Added RTE_FUNC_PTR_OR_ERR_RET
- _get_capabilities :: returns 0 on success and <0 on error
- _get_capabilities :: driver updates mp->flags with its capability value.
- _get_capabilities :: Added appropriate comment
- Fixed _version.map :: replaced DPDK_17.05 with DPDK_16.07
Refer [1].
[1] http://dpdk.org/dev/patchwork/patch/25603/

 lib/librte_mempool/rte_mempool.c           |  5 +++++
 lib/librte_mempool/rte_mempool.h           | 20 ++++++++++++++++++++
 lib/librte_mempool/rte_mempool_ops.c       | 14 ++++++++++++++
 lib/librte_mempool/rte_mempool_version.map |  7 +++++++
 4 files changed, 46 insertions(+)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 237665c65..34619aafd 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -527,6 +527,11 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 	if (mp->nb_mem_chunks != 0)
 		return -EEXIST;
 
+	/* Get mempool capability */
+	ret = rte_mempool_ops_get_capabilities(mp);
+	if (ret)
+		RTE_LOG(DEBUG, MEMPOOL, "get_capability not supported for %s\n", mp->name);
+
 	if (rte_xen_dom0_supported()) {
 		pg_sz = RTE_PGSIZE_2M;
 		pg_shift = rte_bsf32(pg_sz);
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index bd7be2319..0fa571c72 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -389,6 +389,12 @@ typedef int (*rte_mempool_dequeue_t)(struct rte_mempool *mp,
  */
 typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
 
+/**
+ * Get the mempool capability.
+ */
+typedef int (*rte_mempool_get_capabilities_t)(struct rte_mempool *mp);
+
+
 /** Structure defining mempool operations structure */
 struct rte_mempool_ops {
 	char name[RTE_MEMPOOL_OPS_NAMESIZE]; /**< Name of mempool ops struct. */
@@ -397,6 +403,7 @@ struct rte_mempool_ops {
 	rte_mempool_enqueue_t enqueue;   /**< Enqueue an object. */
 	rte_mempool_dequeue_t dequeue;   /**< Dequeue an object. */
 	rte_mempool_get_count get_count; /**< Get qty of available objs. */
+	rte_mempool_get_capabilities_t get_capabilities; /**< Get capability */
 } __rte_cache_aligned;
 
 #define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */
@@ -508,6 +515,19 @@ rte_mempool_ops_enqueue_bulk(struct rte_mempool *mp, void * const *obj_table,
 unsigned
 rte_mempool_ops_get_count(const struct rte_mempool *mp);
 
+
+/**
+ * @internal wrapper for mempool_ops get_capabilities callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @return
+ *   - 0: Success; Capability updated to mp->flags
+ *   - <0: Error; code of capability function.
+ */
+int
+rte_mempool_ops_get_capabilities(struct rte_mempool *mp);
+
 /**
  * @internal wrapper for mempool_ops free callback.
  *
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
index 5f24de250..31a73cc9a 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -37,6 +37,7 @@
 
 #include <rte_mempool.h>
 #include <rte_errno.h>
+#include <rte_dev.h>
 
 /* indirect jump table to support external memory pools. */
 struct rte_mempool_ops_table rte_mempool_ops_table = {
@@ -85,6 +86,7 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h)
 	ops->enqueue = h->enqueue;
 	ops->dequeue = h->dequeue;
 	ops->get_count = h->get_count;
+	ops->get_capabilities = h->get_capabilities;
 
 	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
 
@@ -123,6 +125,18 @@ rte_mempool_ops_get_count(const struct rte_mempool *mp)
 	return ops->get_count(mp);
 }
 
+/* wrapper to get external mempool capability. */
+int
+rte_mempool_ops_get_capabilities(struct rte_mempool *mp)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_get_ops(mp->ops_index);
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->get_capabilities, -ENOTSUP);
+	return ops->get_capabilities(mp);
+}
+
 /* sets mempool ops previously registered by rte_mempool_register_ops. */
 int
 rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index f9c079447..3c3471507 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -41,3 +41,10 @@ DPDK_16.07 {
 	rte_mempool_set_ops_byname;
 
 } DPDK_2.0;
+
+DPDK_17.11 {
+	global:
+
+	rte_mempool_ops_get_capabilities;
+
+} DPDK_16.07;
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 116+ messages in thread

* [PATCH v3 3/6] mempool: detect physical contiguous object in pool
  2017-07-20 13:47   ` [PATCH v3 " Santosh Shukla
  2017-07-20 13:47     ` [PATCH v3 1/6] mempool: fix flags data type Santosh Shukla
  2017-07-20 13:47     ` [PATCH v3 2/6] mempool: get the mempool capability Santosh Shukla
@ 2017-07-20 13:47     ` Santosh Shukla
  2017-07-20 13:47     ` [PATCH v3 4/6] mempool: add mempool arg in xmem size and usage Santosh Shukla
                       ` (3 subsequent siblings)
  6 siblings, 0 replies; 116+ messages in thread
From: Santosh Shukla @ 2017-07-20 13:47 UTC (permalink / raw)
  To: thomas, dev, olivier.matz; +Cc: jerin.jacob, hemant.agrawal, Santosh Shukla

The memory area containing all the objects must be physically
contiguous.
The MEMPOOL_F_CAPA_PHYS_CONTIG flag is introduced for such a use case.

The flag is useful to detect whether the pool area has sufficient space
to fit all objects. If not, return -ENOSPC.
This way, we make sure that all objects within a pool are contiguous.

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
v2 -- v3:
- type casted len to uint64_t (fix build warning).

v1 -- v2:
- Renamed flag to MEMPOOL_F_CAPA_PHYS_CONTIG
- Comment reworded.
Refer [1].
[1] http://dpdk.org/dev/patchwork/patch/25604/

 lib/librte_mempool/rte_mempool.c | 8 ++++++++
 lib/librte_mempool/rte_mempool.h | 1 +
 2 files changed, 9 insertions(+)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 34619aafd..958654f2f 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -368,6 +368,14 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
 
 	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
 
+	/* Detect pool area has sufficient space for elements */
+	if (mp->flags & MEMPOOL_F_CAPA_PHYS_CONTIG) {
+		if (len < total_elt_sz * mp->size) {
+			RTE_LOG(ERR, MEMPOOL, "pool area %" PRIx64 " not enough\n", (uint64_t)len);
+			return -ENOSPC;
+		}
+	}
+
 	memhdr = rte_zmalloc("MEMPOOL_MEMHDR", sizeof(*memhdr), 0);
 	if (memhdr == NULL)
 		return -ENOMEM;
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 0fa571c72..ca5634eaf 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -265,6 +265,7 @@ struct rte_mempool {
 #define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
 #define MEMPOOL_F_POOL_CREATED   0x0010 /**< Internal: pool is created. */
 #define MEMPOOL_F_NO_PHYS_CONTIG 0x0020 /**< Don't need physically contiguous objs. */
+#define MEMPOOL_F_CAPA_PHYS_CONTIG 0x0040 /**< Detect physcially contiguous objs */
 
 /**
  * @internal When debug is enabled, store some statistics.
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 116+ messages in thread

* [PATCH v3 4/6] mempool: add mempool arg in xmem size and usage
  2017-07-20 13:47   ` [PATCH v3 " Santosh Shukla
                       ` (2 preceding siblings ...)
  2017-07-20 13:47     ` [PATCH v3 3/6] mempool: detect physical contiguous object in pool Santosh Shukla
@ 2017-07-20 13:47     ` Santosh Shukla
  2017-07-20 13:47     ` [PATCH v3 5/6] mempool: introduce block size align flag Santosh Shukla
                       ` (2 subsequent siblings)
  6 siblings, 0 replies; 116+ messages in thread
From: Santosh Shukla @ 2017-07-20 13:47 UTC (permalink / raw)
  To: thomas, dev, olivier.matz; +Cc: jerin.jacob, hemant.agrawal, Santosh Shukla

xmem_size and xmem_usage need to know the status of mp->flags.
A following patch will make use of that.

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
---
v1 -- v2:
- added new mempool param in xmem_size/usage, Per deprecation notice [1]
 and discussion based on thread [2]
[1] http://dpdk.org/dev/patchwork/patch/26872/
[2] http://dpdk.org/dev/patchwork/patch/25605/

 drivers/net/xenvirt/rte_mempool_gntalloc.c |  5 +++--
 lib/librte_mempool/rte_mempool.c           | 10 ++++++----
 lib/librte_mempool/rte_mempool.h           |  8 ++++++--
 test/test/test_mempool.c                   |  4 ++--
 4 files changed, 17 insertions(+), 10 deletions(-)

diff --git a/drivers/net/xenvirt/rte_mempool_gntalloc.c b/drivers/net/xenvirt/rte_mempool_gntalloc.c
index 73e82f808..ee0bda459 100644
--- a/drivers/net/xenvirt/rte_mempool_gntalloc.c
+++ b/drivers/net/xenvirt/rte_mempool_gntalloc.c
@@ -114,7 +114,7 @@ _create_mempool(const char *name, unsigned elt_num, unsigned elt_size,
 	pg_shift = rte_bsf32(pg_sz);
 
 	rte_mempool_calc_obj_size(elt_size, flags, &objsz);
-	sz = rte_mempool_xmem_size(elt_num, objsz.total_size, pg_shift);
+	sz = rte_mempool_xmem_size(elt_num, objsz.total_size, pg_shift, NULL);
 	pg_num = sz >> pg_shift;
 
 	pa_arr = calloc(pg_num, sizeof(pa_arr[0]));
@@ -162,7 +162,8 @@ _create_mempool(const char *name, unsigned elt_num, unsigned elt_size,
 	 * Check that allocated size is big enough to hold elt_num
 	 * objects and a calcualte how many bytes are actually required.
 	 */
-	usz = rte_mempool_xmem_usage(va, elt_num, objsz.total_size, pa_arr, pg_num, pg_shift);
+	usz = rte_mempool_xmem_usage(va, elt_num, objsz.total_size, pa_arr,
+				     pg_num, pg_shift, NULL);
 	if (usz < 0) {
 		mp = NULL;
 		i = pg_num;
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 958654f2f..19e5e6ddf 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -238,7 +238,8 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
  * Calculate maximum amount of memory required to store given number of objects.
  */
 size_t
-rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift)
+rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
+		      __rte_unused const struct rte_mempool *mp)
 {
 	size_t obj_per_page, pg_num, pg_sz;
 
@@ -264,13 +265,14 @@ rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift)
 ssize_t
 rte_mempool_xmem_usage(__rte_unused void *vaddr, uint32_t elt_num,
 	size_t total_elt_sz, const phys_addr_t paddr[], uint32_t pg_num,
-	uint32_t pg_shift)
+	uint32_t pg_shift, __rte_unused const struct rte_mempool *mp)
 {
 	uint32_t elt_cnt = 0;
 	phys_addr_t start, end;
 	uint32_t paddr_idx;
 	size_t pg_sz = (size_t)1 << pg_shift;
 
+
 	/* if paddr is NULL, assume contiguous memory */
 	if (paddr == NULL) {
 		start = 0;
@@ -556,7 +558,7 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 
 	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
 	for (mz_id = 0, n = mp->size; n > 0; mz_id++, n -= ret) {
-		size = rte_mempool_xmem_size(n, total_elt_sz, pg_shift);
+		size = rte_mempool_xmem_size(n, total_elt_sz, pg_shift, mp);
 
 		ret = snprintf(mz_name, sizeof(mz_name),
 			RTE_MEMPOOL_MZ_FORMAT "_%d", mp->name, mz_id);
@@ -613,7 +615,7 @@ get_anon_size(const struct rte_mempool *mp)
 	pg_sz = getpagesize();
 	pg_shift = rte_bsf32(pg_sz);
 	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
-	size = rte_mempool_xmem_size(mp->size, total_elt_sz, pg_shift);
+	size = rte_mempool_xmem_size(mp->size, total_elt_sz, pg_shift, mp);
 
 	return size;
 }
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index ca5634eaf..a4bfdb56e 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -1497,11 +1497,13 @@ uint32_t rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
  *   by rte_mempool_calc_obj_size().
  * @param pg_shift
  *   LOG2 of the physical pages size. If set to 0, ignore page boundaries.
+ * @param mp
+ *  A pointer to the mempool structure.
  * @return
  *   Required memory size aligned at page boundary.
  */
 size_t rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz,
-	uint32_t pg_shift);
+	uint32_t pg_shift, const struct rte_mempool *mp);
 
 /**
  * Get the size of memory required to store mempool elements.
@@ -1524,6 +1526,8 @@ size_t rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz,
  *   Number of elements in the paddr array.
  * @param pg_shift
  *   LOG2 of the physical pages size.
+ * @param mp
+ *  A pointer to the mempool structure.
  * @return
  *   On success, the number of bytes needed to store given number of
  *   objects, aligned to the given page size. If the provided memory
@@ -1532,7 +1536,7 @@ size_t rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz,
  */
 ssize_t rte_mempool_xmem_usage(void *vaddr, uint32_t elt_num,
 	size_t total_elt_sz, const phys_addr_t paddr[], uint32_t pg_num,
-	uint32_t pg_shift);
+	uint32_t pg_shift, const struct rte_mempool *mp);
 
 /**
  * Walk list of all memory pools
diff --git a/test/test/test_mempool.c b/test/test/test_mempool.c
index 47dc3ac5f..1eb81081c 100644
--- a/test/test/test_mempool.c
+++ b/test/test/test_mempool.c
@@ -485,10 +485,10 @@ test_mempool_xmem_misc(void)
 
 	elt_num = MAX_KEEP;
 	total_size = rte_mempool_calc_obj_size(MEMPOOL_ELT_SIZE, 0, NULL);
-	sz = rte_mempool_xmem_size(elt_num, total_size, MEMPOOL_PG_SHIFT_MAX);
+	sz = rte_mempool_xmem_size(elt_num, total_size, MEMPOOL_PG_SHIFT_MAX, NULL);
 
 	usz = rte_mempool_xmem_usage(NULL, elt_num, total_size, 0, 1,
-		MEMPOOL_PG_SHIFT_MAX);
+		MEMPOOL_PG_SHIFT_MAX, NULL);
 
 	if (sz != (size_t)usz)  {
 		printf("failure @ %s: rte_mempool_xmem_usage(%u, %u) "
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 116+ messages in thread

* [PATCH v3 5/6] mempool: introduce block size align flag
  2017-07-20 13:47   ` [PATCH v3 " Santosh Shukla
                       ` (3 preceding siblings ...)
  2017-07-20 13:47     ` [PATCH v3 4/6] mempool: add mempool arg in xmem size and usage Santosh Shukla
@ 2017-07-20 13:47     ` Santosh Shukla
  2017-07-20 13:47     ` [PATCH v3 6/6] mempool: update range info to pool Santosh Shukla
  2017-08-15  6:07     ` [PATCH v4 0/7] Infrastructure to support octeontx HW mempool manager Santosh Shukla
  6 siblings, 0 replies; 116+ messages in thread
From: Santosh Shukla @ 2017-07-20 13:47 UTC (permalink / raw)
  To: thomas, dev, olivier.matz; +Cc: jerin.jacob, hemant.agrawal, Santosh Shukla

Some mempool HW, like the octeontx/fpa block, demands a block size
(/total_elem_sz) aligned object start address.

Introducing the MEMPOOL_F_POOL_BLK_SZ_ALIGNED flag.
If this flag is set:
- Align the object start address to a multiple of total_elt_sz.
- Allocate one additional object. The additional object is needed to make
  sure that the requested 'n' objects get correctly populated. Example:

- Let's say we get a memory chunk of size 'x' from the memzone.
- And the application has requested 'n' objects from the mempool.
- Ideally, we would start placing objects at address 0 up to (x-block_sz)
  for n objects.
- But the first object address, i.e. 0, is not necessarily aligned to block_sz.
- So we derive an offset value 'off' for block_sz alignment purposes.
- That 'off' makes sure that the start address of each object is blk_sz
  aligned.
- Calculating 'off' may end up sacrificing the first block_sz area of the
  memzone area x. So the total number of objects which can fit in the
  pool area is n-1, which is incorrect behavior.

Therefore we request one additional object (/block_sz area) from the memzone
when the F_BLK_SZ_ALIGNED flag is set.

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
v1 -- v2:
- patch description changed.
- Removed else-if bracket mix
- Removed sanity check for alignment
- Removed extra var delta
- Removed __rte_unused from xmem_usage/size and added _BLK_SZ_ALIGN check.
Refer v1 review comment [1].
[1] http://dpdk.org/dev/patchwork/patch/25605/

 lib/librte_mempool/rte_mempool.c | 16 +++++++++++++---
 lib/librte_mempool/rte_mempool.h |  1 +
 2 files changed, 14 insertions(+), 3 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 19e5e6ddf..7610f0d1f 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -239,10 +239,14 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
  */
 size_t
 rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
-		      __rte_unused const struct rte_mempool *mp)
+		      const struct rte_mempool *mp)
 {
 	size_t obj_per_page, pg_num, pg_sz;
 
+	if (mp && mp->flags & MEMPOOL_F_POOL_BLK_SZ_ALIGNED)
+		/* alignment need one additional object */
+		elt_num += 1;
+
 	if (total_elt_sz == 0)
 		return 0;
 
@@ -265,13 +269,16 @@ rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
 ssize_t
 rte_mempool_xmem_usage(__rte_unused void *vaddr, uint32_t elt_num,
 	size_t total_elt_sz, const phys_addr_t paddr[], uint32_t pg_num,
-	uint32_t pg_shift, __rte_unused const struct rte_mempool *mp)
+	uint32_t pg_shift, const struct rte_mempool *mp)
 {
 	uint32_t elt_cnt = 0;
 	phys_addr_t start, end;
 	uint32_t paddr_idx;
 	size_t pg_sz = (size_t)1 << pg_shift;
 
+	if (mp && mp->flags & MEMPOOL_F_POOL_BLK_SZ_ALIGNED)
+		/* alignment need one additional object */
+		elt_num += 1;
 
 	/* if paddr is NULL, assume contiguous memory */
 	if (paddr == NULL) {
@@ -389,7 +396,10 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
 	memhdr->free_cb = free_cb;
 	memhdr->opaque = opaque;
 
-	if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
+	if (mp->flags & MEMPOOL_F_POOL_BLK_SZ_ALIGNED)
+		/* align object start address to a multiple of total_elt_sz */
+		off = total_elt_sz - ((uintptr_t)vaddr % total_elt_sz);
+	else if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
 		off = RTE_PTR_ALIGN_CEIL(vaddr, 8) - vaddr;
 	else
 		off = RTE_PTR_ALIGN_CEIL(vaddr, RTE_CACHE_LINE_SIZE) - vaddr;
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index a4bfdb56e..d7c2416f4 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -266,6 +266,7 @@ struct rte_mempool {
 #define MEMPOOL_F_POOL_CREATED   0x0010 /**< Internal: pool is created. */
 #define MEMPOOL_F_NO_PHYS_CONTIG 0x0020 /**< Don't need physically contiguous objs. */
 #define MEMPOOL_F_CAPA_PHYS_CONTIG 0x0040 /**< Detect physcially contiguous objs */
+#define MEMPOOL_F_POOL_BLK_SZ_ALIGNED 0x0080 /**< Align obj start address to total elem size */
 
 /**
  * @internal When debug is enabled, store some statistics.
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 116+ messages in thread

* [PATCH v3 6/6] mempool: update range info to pool
  2017-07-20 13:47   ` [PATCH v3 " Santosh Shukla
                       ` (4 preceding siblings ...)
  2017-07-20 13:47     ` [PATCH v3 5/6] mempool: introduce block size align flag Santosh Shukla
@ 2017-07-20 13:47     ` Santosh Shukla
  2017-08-15  6:07     ` [PATCH v4 0/7] Infrastructure to support octeontx HW mempool manager Santosh Shukla
  6 siblings, 0 replies; 116+ messages in thread
From: Santosh Shukla @ 2017-07-20 13:47 UTC (permalink / raw)
  To: thomas, dev, olivier.matz; +Cc: jerin.jacob, hemant.agrawal, Santosh Shukla

HW pool managers, e.g. the Octeontx SoC, demand that software program the
pool's start and end addresses. Currently, there is no such handle in the
external mempool. Introduce the rte_mempool_ops_update_range handler, which
lets the HW pool manager know which hugepage the common layer selects:
For each hugepage - update its start/end address to the HW pool manager.

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
v1 -- v2:
- Added RTE_FUNC_PTR_OR_RET

 lib/librte_mempool/rte_mempool.c           |  3 +++
 lib/librte_mempool/rte_mempool.h           | 22 ++++++++++++++++++++++
 lib/librte_mempool/rte_mempool_ops.c       | 13 +++++++++++++
 lib/librte_mempool/rte_mempool_version.map |  1 +
 4 files changed, 39 insertions(+)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 7610f0d1f..df7996df8 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -363,6 +363,9 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
 	struct rte_mempool_memhdr *memhdr;
 	int ret;
 
+	/* update range info to mempool */
+	rte_mempool_ops_update_range(mp, vaddr, paddr, len);
+
 	/* create the internal ring if not already done */
 	if ((mp->flags & MEMPOOL_F_POOL_CREATED) == 0) {
 		ret = rte_mempool_ops_alloc(mp);
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index d7c2416f4..b59a522cd 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -396,6 +396,11 @@ typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
  */
 typedef int (*rte_mempool_get_capabilities_t)(struct rte_mempool *mp);
 
+/**
+ * Update range info to mempool.
+ */
+typedef void (*rte_mempool_update_range_t)(const struct rte_mempool *mp,
+		char *vaddr, phys_addr_t paddr, size_t len);
 
 /** Structure defining mempool operations structure */
 struct rte_mempool_ops {
@@ -406,6 +411,7 @@ struct rte_mempool_ops {
 	rte_mempool_dequeue_t dequeue;   /**< Dequeue an object. */
 	rte_mempool_get_count get_count; /**< Get qty of available objs. */
 	rte_mempool_get_capabilities_t get_capabilities; /**< Get capability */
+	rte_mempool_update_range_t update_range; /**< Update range to mempool */
 } __rte_cache_aligned;
 
 #define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */
@@ -531,6 +537,22 @@ int
 rte_mempool_ops_get_capabilities(struct rte_mempool *mp);
 
 /**
+ * @internal wrapper for mempool_ops update_range callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param vaddr
+ *   Pointer to the buffer virtual address
+ * @param paddr
+ *   Pointer to the buffer physical address
+ * @param len
+ *   Pool size
+ */
+void
+rte_mempool_ops_update_range(const struct rte_mempool *mp,
+				char *vaddr, phys_addr_t paddr, size_t len);
+
+/**
  * @internal wrapper for mempool_ops free callback.
  *
  * @param mp
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
index 31a73cc9a..7bb52b3ca 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -87,6 +87,7 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h)
 	ops->dequeue = h->dequeue;
 	ops->get_count = h->get_count;
 	ops->get_capabilities = h->get_capabilities;
+	ops->update_range = h->update_range;
 
 	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
 
@@ -137,6 +138,18 @@ rte_mempool_ops_get_capabilities(struct rte_mempool *mp)
 	return ops->get_capabilities(mp);
 }
 
+/* wrapper to update range info to external mempool */
+void
+rte_mempool_ops_update_range(const struct rte_mempool *mp, char *vaddr,
+			     phys_addr_t paddr, size_t len)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_get_ops(mp->ops_index);
+	RTE_FUNC_PTR_OR_RET(ops->update_range);
+	ops->update_range(mp, vaddr, paddr, len);
+}
+
 /* sets mempool ops previously registered by rte_mempool_register_ops. */
 int
 rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index 3c3471507..2663001c3 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -46,5 +46,6 @@ DPDK_17.11 {
 	global:
 
 	rte_mempool_ops_get_capabilities;
+	rte_mempool_ops_update_range;
 
 } DPDK_16.07;
-- 
2.11.0


* [PATCH v4 0/7] Infrastructure to support octeontx HW mempool manager
  2017-07-20 13:47   ` [PATCH v3 " Santosh Shukla
                       ` (5 preceding siblings ...)
  2017-07-20 13:47     ` [PATCH v3 6/6] mempool: update range info to pool Santosh Shukla
@ 2017-08-15  6:07     ` Santosh Shukla
  2017-08-15  6:07       ` [PATCH v4 1/7] mempool: fix flags data type Santosh Shukla
                         ` (7 more replies)
  6 siblings, 8 replies; 116+ messages in thread
From: Santosh Shukla @ 2017-08-15  6:07 UTC (permalink / raw)
  To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla

v4:
Includes:
- mempool deprecation changes, refer [1].
- patches rebased against v17.11-rc0.

v3:
In order to support the octeontx HW mempool manager, the common mempool layer
must meet the conditions below:
- Object start addresses should be block size (total element size) aligned.
- Objects must have physically contiguous addresses within the pool.

Right now, mempool supports neither.

The patchset adds infrastructure to support both conditions in a _generic_ way.
The proposed solution won't affect existing mempool drivers or their functionality.

Summary:
A capability flag is introduced so that mempool drivers can advertise their
capabilities to the common mempool layer (at pool creation time).
Handlers are introduced in order to support the capability flags.

Flags:
* MEMPOOL_F_CAPA_PHYS_CONTIG - If this flag is set, detect whether the objects
have physically contiguous addresses within a hugepage.

* MEMPOOL_F_POOL_BLK_SZ_ALIGNED - If this flag is set, make sure that object
addresses are block size aligned.
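The intent of the alignment flag can be illustrated with plain arithmetic. The following is a standalone sketch (not DPDK code; function names are illustrative): with MEMPOOL_F_POOL_BLK_SZ_ALIGNED set, each object's start address must be a multiple of the total element size, so the populate path has to round the start of a memory area up accordingly.

```c
#include <stdint.h>
#include <stddef.h>

/* Round addr up to the next multiple of total_elt_sz (header + elt +
 * trailer), the adjustment a block-size-aligned pool applies before
 * laying out objects. */
static uintptr_t
align_up_to_elt(uintptr_t addr, size_t total_elt_sz)
{
	return ((addr + total_elt_sz - 1) / total_elt_sz) * total_elt_sz;
}

/* An object placed at addr satisfies the flag iff this holds. */
static int
obj_is_blk_aligned(uintptr_t addr, size_t total_elt_sz)
{
	return (addr % total_elt_sz) == 0;
}
```

The cost is some wasted space at the start of each memory area, which is why the size/usage calculations below also need to learn about the flag.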

API:
Two handles are introduced:
* rte_mempool_ops_get_capabilities - advertise mempool manager capabilities.
* rte_mempool_ops_update_range - update the start and end address range in the
HW mempool manager.

v3 --> v4:
* [01 - 02 - 03/07] mempool deprecation notice changes.
* [04 - 05 - 06 - 07/07] are v3 patches.

v2 --> v3:
(Note: v3 work is based on the deprecation notice [1]; it targets 17.11.)
* Changed _version.map from 17.08 to 17.11.
* Build fixes reported by stv_sys.
* Patchset rebased on upstream commit da94a999.


v1 --> v2:
* [01/06] Per deprecation notice [1], changed rte_mempool 'flag'
  data type from int to unsigned int and removed flag param
  from _xmem_size/usage api.
* [02/06] Incorporated review feedback from v1 [2]. (Suggested by Olivier)
* [03/06] Renamed flag to MEMPOOL_F_CAPA_PHYS_CONTIG
  and reworded the comment. (Suggested by Olivier per v1 [3])
* [04/06] Added new mempool arg in xmem_size/usage. (Suggested by Olivier)
* [05/06] Patch description changed.
        - Removed else-if bracket mix
        - Removed sanity check for alignment
        - Removed extra var delta
        - Removed __rte_unused from xmem_usage/size and added _BLK_SZ_ALIGN check.
        (Suggested by Olivier per v1 [4])
* [06/06] Added RTE_FUNC_PTR_OR_RET in rte_mempool_ops_update_range.

Checkpatch status:
* WARNING: line over 80 characters
Noticed for debug messages.

Work history:
For v1, refer [5].

Thanks.

[1] deprecation notice v2: http://dpdk.org/dev/patchwork/patch/27079/
[2] v1: http://dpdk.org/dev/patchwork/patch/25603/
[3] v1: http://dpdk.org/dev/patchwork/patch/25604/
[4] v1: http://dpdk.org/dev/patchwork/patch/25605/
[5] v1: http://dev.dpdk.narkive.com/Qcu55Lgz/dpdk-dev-patch-0-4-infrastructure-to-support-octeontx-hw-mempool-manager


Santosh Shukla (7):
  mempool: fix flags data type
  mempool: add mempool arg in xmem size and usage
  doc: remove mempool api change notice
  mempool: get the mempool capability
  mempool: detect physical contiguous object in pool
  mempool: introduce block size align flag
  mempool: update range info to pool

 doc/guides/rel_notes/deprecation.rst       |  9 ----
 doc/guides/rel_notes/release_17_11.rst     |  7 +++
 drivers/net/xenvirt/rte_mempool_gntalloc.c |  5 +-
 lib/librte_mempool/rte_mempool.c           | 42 ++++++++++++++---
 lib/librte_mempool/rte_mempool.h           | 75 ++++++++++++++++++++++--------
 lib/librte_mempool/rte_mempool_ops.c       | 27 +++++++++++
 lib/librte_mempool/rte_mempool_version.map |  8 ++++
 test/test/test_mempool.c                   | 22 ++++-----
 test/test/test_mempool_perf.c              |  4 +-
 9 files changed, 148 insertions(+), 51 deletions(-)

-- 
2.11.0


* [PATCH v4 1/7] mempool: fix flags data type
  2017-08-15  6:07     ` [PATCH v4 0/7] Infrastructure to support octeontx HW mempool manager Santosh Shukla
@ 2017-08-15  6:07       ` Santosh Shukla
  2017-09-04 14:11         ` Olivier MATZ
  2017-08-15  6:07       ` [PATCH v4 2/7] mempool: add mempool arg in xmem size and usage Santosh Shukla
                         ` (6 subsequent siblings)
  7 siblings, 1 reply; 116+ messages in thread
From: Santosh Shukla @ 2017-08-15  6:07 UTC (permalink / raw)
  To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla

mp->flags is an int but the mempool API stores unsigned int
values in 'flags', so fix the 'flags' data type.

The patch also does mp->flags cleanup:
* Remove the redundant 'flags' API description from
  - __rte_mempool_generic_put
  - __rte_mempool_generic_get

* Remove the unused 'flags' param from
  - rte_mempool_generic_put
  - rte_mempool_generic_get

* Fix mempool var data types in mempool.c
  - mz_flags is int; change it to unsigned int.

Fixes: af75078fec ("first public release")
Fixes: 454a0a7009 ("mempool: use cache in single producer or consumer mode")
Fixes: d6f78df6fe ("mempool: use bit flags for multi consumers and producers")
Fixes: d1d914ebbc ("mempool: allocate in several memory chunks by default")

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
---
 lib/librte_mempool/rte_mempool.c |  4 ++--
 lib/librte_mempool/rte_mempool.h | 23 +++++------------------
 test/test/test_mempool.c         | 18 +++++++++---------
 test/test/test_mempool_perf.c    |  4 ++--
 4 files changed, 18 insertions(+), 31 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 6fc3c9c7c..237665c65 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -515,7 +515,7 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
 int
 rte_mempool_populate_default(struct rte_mempool *mp)
 {
-	int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
+	unsigned int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
 	char mz_name[RTE_MEMZONE_NAMESIZE];
 	const struct rte_memzone *mz;
 	size_t size, total_elt_sz, align, pg_sz, pg_shift;
@@ -742,7 +742,7 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
 	struct rte_tailq_entry *te = NULL;
 	const struct rte_memzone *mz = NULL;
 	size_t mempool_size;
-	int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
+	unsigned int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
 	struct rte_mempool_objsz objsz;
 	unsigned lcore_id;
 	int ret;
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 76b5b3b15..bd7be2319 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -226,7 +226,7 @@ struct rte_mempool {
 	};
 	void *pool_config;               /**< optional args for ops alloc. */
 	const struct rte_memzone *mz;    /**< Memzone where pool is alloc'd. */
-	int flags;                       /**< Flags of the mempool. */
+	unsigned int flags;              /**< Flags of the mempool. */
 	int socket_id;                   /**< Socket id passed at create. */
 	uint32_t size;                   /**< Max size of the mempool. */
 	uint32_t cache_size;
@@ -1034,9 +1034,6 @@ rte_mempool_default_cache(struct rte_mempool *mp, unsigned lcore_id)
  *   positive.
  * @param cache
  *   A pointer to a mempool cache structure. May be NULL if not needed.
- * @param flags
- *   The flags used for the mempool creation.
- *   Single-producer (MEMPOOL_F_SP_PUT flag) or multi-producers.
  */
 static __rte_always_inline void
 __mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
@@ -1096,14 +1093,10 @@ __mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
  *   The number of objects to add in the mempool from the obj_table.
  * @param cache
  *   A pointer to a mempool cache structure. May be NULL if not needed.
- * @param flags
- *   The flags used for the mempool creation.
- *   Single-producer (MEMPOOL_F_SP_PUT flag) or multi-producers.
  */
 static __rte_always_inline void
 rte_mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
-			unsigned n, struct rte_mempool_cache *cache,
-			__rte_unused int flags)
+			unsigned n, struct rte_mempool_cache *cache)
 {
 	__mempool_check_cookies(mp, obj_table, n, 0);
 	__mempool_generic_put(mp, obj_table, n, cache);
@@ -1129,7 +1122,7 @@ rte_mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
 {
 	struct rte_mempool_cache *cache;
 	cache = rte_mempool_default_cache(mp, rte_lcore_id());
-	rte_mempool_generic_put(mp, obj_table, n, cache, mp->flags);
+	rte_mempool_generic_put(mp, obj_table, n, cache);
 }
 
 /**
@@ -1160,9 +1153,6 @@ rte_mempool_put(struct rte_mempool *mp, void *obj)
  *   The number of objects to get, must be strictly positive.
  * @param cache
  *   A pointer to a mempool cache structure. May be NULL if not needed.
- * @param flags
- *   The flags used for the mempool creation.
- *   Single-consumer (MEMPOOL_F_SC_GET flag) or multi-consumers.
  * @return
  *   - >=0: Success; number of objects supplied.
  *   - <0: Error; code of ring dequeue function.
@@ -1241,16 +1231,13 @@ __mempool_generic_get(struct rte_mempool *mp, void **obj_table,
  *   The number of objects to get from mempool to obj_table.
  * @param cache
  *   A pointer to a mempool cache structure. May be NULL if not needed.
- * @param flags
- *   The flags used for the mempool creation.
- *   Single-consumer (MEMPOOL_F_SC_GET flag) or multi-consumers.
  * @return
  *   - 0: Success; objects taken.
  *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.
  */
 static __rte_always_inline int
 rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table, unsigned n,
-			struct rte_mempool_cache *cache, __rte_unused int flags)
+			struct rte_mempool_cache *cache)
 {
 	int ret;
 	ret = __mempool_generic_get(mp, obj_table, n, cache);
@@ -1286,7 +1273,7 @@ rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
 {
 	struct rte_mempool_cache *cache;
 	cache = rte_mempool_default_cache(mp, rte_lcore_id());
-	return rte_mempool_generic_get(mp, obj_table, n, cache, mp->flags);
+	return rte_mempool_generic_get(mp, obj_table, n, cache);
 }
 
 /**
diff --git a/test/test/test_mempool.c b/test/test/test_mempool.c
index 0a4423954..47dc3ac5f 100644
--- a/test/test/test_mempool.c
+++ b/test/test/test_mempool.c
@@ -129,7 +129,7 @@ test_mempool_basic(struct rte_mempool *mp, int use_external_cache)
 	rte_mempool_dump(stdout, mp);
 
 	printf("get an object\n");
-	if (rte_mempool_generic_get(mp, &obj, 1, cache, 0) < 0)
+	if (rte_mempool_generic_get(mp, &obj, 1, cache) < 0)
 		GOTO_ERR(ret, out);
 	rte_mempool_dump(stdout, mp);
 
@@ -152,21 +152,21 @@ test_mempool_basic(struct rte_mempool *mp, int use_external_cache)
 #endif
 
 	printf("put the object back\n");
-	rte_mempool_generic_put(mp, &obj, 1, cache, 0);
+	rte_mempool_generic_put(mp, &obj, 1, cache);
 	rte_mempool_dump(stdout, mp);
 
 	printf("get 2 objects\n");
-	if (rte_mempool_generic_get(mp, &obj, 1, cache, 0) < 0)
+	if (rte_mempool_generic_get(mp, &obj, 1, cache) < 0)
 		GOTO_ERR(ret, out);
-	if (rte_mempool_generic_get(mp, &obj2, 1, cache, 0) < 0) {
-		rte_mempool_generic_put(mp, &obj, 1, cache, 0);
+	if (rte_mempool_generic_get(mp, &obj2, 1, cache) < 0) {
+		rte_mempool_generic_put(mp, &obj, 1, cache);
 		GOTO_ERR(ret, out);
 	}
 	rte_mempool_dump(stdout, mp);
 
 	printf("put the objects back\n");
-	rte_mempool_generic_put(mp, &obj, 1, cache, 0);
-	rte_mempool_generic_put(mp, &obj2, 1, cache, 0);
+	rte_mempool_generic_put(mp, &obj, 1, cache);
+	rte_mempool_generic_put(mp, &obj2, 1, cache);
 	rte_mempool_dump(stdout, mp);
 
 	/*
@@ -178,7 +178,7 @@ test_mempool_basic(struct rte_mempool *mp, int use_external_cache)
 		GOTO_ERR(ret, out);
 
 	for (i = 0; i < MEMPOOL_SIZE; i++) {
-		if (rte_mempool_generic_get(mp, &objtable[i], 1, cache, 0) < 0)
+		if (rte_mempool_generic_get(mp, &objtable[i], 1, cache) < 0)
 			break;
 	}
 
@@ -200,7 +200,7 @@ test_mempool_basic(struct rte_mempool *mp, int use_external_cache)
 				ret = -1;
 		}
 
-		rte_mempool_generic_put(mp, &objtable[i], 1, cache, 0);
+		rte_mempool_generic_put(mp, &objtable[i], 1, cache);
 	}
 
 	free(objtable);
diff --git a/test/test/test_mempool_perf.c b/test/test/test_mempool_perf.c
index 07b28c066..3b8f7de7c 100644
--- a/test/test/test_mempool_perf.c
+++ b/test/test/test_mempool_perf.c
@@ -186,7 +186,7 @@ per_lcore_mempool_test(void *arg)
 				ret = rte_mempool_generic_get(mp,
 							      &obj_table[idx],
 							      n_get_bulk,
-							      cache, 0);
+							      cache);
 				if (unlikely(ret < 0)) {
 					rte_mempool_dump(stdout, mp);
 					/* in this case, objects are lost... */
@@ -200,7 +200,7 @@ per_lcore_mempool_test(void *arg)
 			while (idx < n_keep) {
 				rte_mempool_generic_put(mp, &obj_table[idx],
 							n_put_bulk,
-							cache, 0);
+							cache);
 				idx += n_put_bulk;
 			}
 		}
-- 
2.11.0


* [PATCH v4 2/7] mempool: add mempool arg in xmem size and usage
  2017-08-15  6:07     ` [PATCH v4 0/7] Infrastructure to support octeontx HW mempool manager Santosh Shukla
  2017-08-15  6:07       ` [PATCH v4 1/7] mempool: fix flags data type Santosh Shukla
@ 2017-08-15  6:07       ` Santosh Shukla
  2017-09-04 14:22         ` Olivier MATZ
  2017-08-15  6:07       ` [PATCH v4 3/7] doc: remove mempool api change notice Santosh Shukla
                         ` (5 subsequent siblings)
  7 siblings, 1 reply; 116+ messages in thread
From: Santosh Shukla @ 2017-08-15  6:07 UTC (permalink / raw)
  To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla

xmem_size and xmem_usage need to know the status of mp->flags.
A following patch will make use of that.
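For context, here is a simplified model of what rte_mempool_xmem_size() computes today, assuming objects do not cross page boundaries and each object fits in one page (a sketch, not the exact DPDK implementation). The new mp argument exists so that a later patch can make this calculation depend on pool flags such as block-size alignment.

```c
#include <stddef.h>
#include <stdint.h>

/* Simplified model of rte_mempool_xmem_size(): bytes needed to store
 * elt_num objects of total_elt_sz bytes each, given LOG2 of page size. */
static size_t
xmem_size_model(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift)
{
	size_t pg_sz, obj_per_page, pg_num;

	if (pg_shift == 0)             /* 0 means: ignore page boundaries */
		return total_elt_sz * elt_num;

	pg_sz = (size_t)1 << pg_shift;
	obj_per_page = pg_sz / total_elt_sz; /* page tail space is wasted */
	pg_num = (elt_num + obj_per_page - 1) / obj_per_page;
	return pg_num << pg_shift;     /* rounded up to whole pages */
}
```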

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
---
 drivers/net/xenvirt/rte_mempool_gntalloc.c |  5 +++--
 lib/librte_mempool/rte_mempool.c           | 10 ++++++----
 lib/librte_mempool/rte_mempool.h           |  8 ++++++--
 test/test/test_mempool.c                   |  4 ++--
 4 files changed, 17 insertions(+), 10 deletions(-)

diff --git a/drivers/net/xenvirt/rte_mempool_gntalloc.c b/drivers/net/xenvirt/rte_mempool_gntalloc.c
index 73e82f808..ee0bda459 100644
--- a/drivers/net/xenvirt/rte_mempool_gntalloc.c
+++ b/drivers/net/xenvirt/rte_mempool_gntalloc.c
@@ -114,7 +114,7 @@ _create_mempool(const char *name, unsigned elt_num, unsigned elt_size,
 	pg_shift = rte_bsf32(pg_sz);
 
 	rte_mempool_calc_obj_size(elt_size, flags, &objsz);
-	sz = rte_mempool_xmem_size(elt_num, objsz.total_size, pg_shift);
+	sz = rte_mempool_xmem_size(elt_num, objsz.total_size, pg_shift, NULL);
 	pg_num = sz >> pg_shift;
 
 	pa_arr = calloc(pg_num, sizeof(pa_arr[0]));
@@ -162,7 +162,8 @@ _create_mempool(const char *name, unsigned elt_num, unsigned elt_size,
 	 * Check that allocated size is big enough to hold elt_num
 	 * objects and a calcualte how many bytes are actually required.
 	 */
-	usz = rte_mempool_xmem_usage(va, elt_num, objsz.total_size, pa_arr, pg_num, pg_shift);
+	usz = rte_mempool_xmem_usage(va, elt_num, objsz.total_size, pa_arr,
+				     pg_num, pg_shift, NULL);
 	if (usz < 0) {
 		mp = NULL;
 		i = pg_num;
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 237665c65..f95c01c00 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -238,7 +238,8 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
  * Calculate maximum amount of memory required to store given number of objects.
  */
 size_t
-rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift)
+rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
+		      __rte_unused const struct rte_mempool *mp)
 {
 	size_t obj_per_page, pg_num, pg_sz;
 
@@ -264,13 +265,14 @@ rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift)
 ssize_t
 rte_mempool_xmem_usage(__rte_unused void *vaddr, uint32_t elt_num,
 	size_t total_elt_sz, const phys_addr_t paddr[], uint32_t pg_num,
-	uint32_t pg_shift)
+	uint32_t pg_shift, __rte_unused const struct rte_mempool *mp)
 {
 	uint32_t elt_cnt = 0;
 	phys_addr_t start, end;
 	uint32_t paddr_idx;
 	size_t pg_sz = (size_t)1 << pg_shift;
 
+
 	/* if paddr is NULL, assume contiguous memory */
 	if (paddr == NULL) {
 		start = 0;
@@ -543,7 +545,7 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 
 	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
 	for (mz_id = 0, n = mp->size; n > 0; mz_id++, n -= ret) {
-		size = rte_mempool_xmem_size(n, total_elt_sz, pg_shift);
+		size = rte_mempool_xmem_size(n, total_elt_sz, pg_shift, mp);
 
 		ret = snprintf(mz_name, sizeof(mz_name),
 			RTE_MEMPOOL_MZ_FORMAT "_%d", mp->name, mz_id);
@@ -600,7 +602,7 @@ get_anon_size(const struct rte_mempool *mp)
 	pg_sz = getpagesize();
 	pg_shift = rte_bsf32(pg_sz);
 	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
-	size = rte_mempool_xmem_size(mp->size, total_elt_sz, pg_shift);
+	size = rte_mempool_xmem_size(mp->size, total_elt_sz, pg_shift, mp);
 
 	return size;
 }
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index bd7be2319..74e91d34f 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -1476,11 +1476,13 @@ uint32_t rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
  *   by rte_mempool_calc_obj_size().
  * @param pg_shift
  *   LOG2 of the physical pages size. If set to 0, ignore page boundaries.
+ * @param mp
+ *  A pointer to the mempool structure.
  * @return
  *   Required memory size aligned at page boundary.
  */
 size_t rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz,
-	uint32_t pg_shift);
+	uint32_t pg_shift, const struct rte_mempool *mp);
 
 /**
  * Get the size of memory required to store mempool elements.
@@ -1503,6 +1505,8 @@ size_t rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz,
  *   Number of elements in the paddr array.
  * @param pg_shift
  *   LOG2 of the physical pages size.
+ * @param mp
+ *  A pointer to the mempool structure.
  * @return
  *   On success, the number of bytes needed to store given number of
  *   objects, aligned to the given page size. If the provided memory
@@ -1511,7 +1515,7 @@ size_t rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz,
  */
 ssize_t rte_mempool_xmem_usage(void *vaddr, uint32_t elt_num,
 	size_t total_elt_sz, const phys_addr_t paddr[], uint32_t pg_num,
-	uint32_t pg_shift);
+	uint32_t pg_shift, const struct rte_mempool *mp);
 
 /**
  * Walk list of all memory pools
diff --git a/test/test/test_mempool.c b/test/test/test_mempool.c
index 47dc3ac5f..1eb81081c 100644
--- a/test/test/test_mempool.c
+++ b/test/test/test_mempool.c
@@ -485,10 +485,10 @@ test_mempool_xmem_misc(void)
 
 	elt_num = MAX_KEEP;
 	total_size = rte_mempool_calc_obj_size(MEMPOOL_ELT_SIZE, 0, NULL);
-	sz = rte_mempool_xmem_size(elt_num, total_size, MEMPOOL_PG_SHIFT_MAX);
+	sz = rte_mempool_xmem_size(elt_num, total_size, MEMPOOL_PG_SHIFT_MAX, NULL);
 
 	usz = rte_mempool_xmem_usage(NULL, elt_num, total_size, 0, 1,
-		MEMPOOL_PG_SHIFT_MAX);
+		MEMPOOL_PG_SHIFT_MAX, NULL);
 
 	if (sz != (size_t)usz)  {
 		printf("failure @ %s: rte_mempool_xmem_usage(%u, %u) "
-- 
2.11.0


* [PATCH v4 3/7] doc: remove mempool api change notice
  2017-08-15  6:07     ` [PATCH v4 0/7] Infrastructure to support octeontx HW mempool manager Santosh Shukla
  2017-08-15  6:07       ` [PATCH v4 1/7] mempool: fix flags data type Santosh Shukla
  2017-08-15  6:07       ` [PATCH v4 2/7] mempool: add mempool arg in xmem size and usage Santosh Shukla
@ 2017-08-15  6:07       ` Santosh Shukla
  2017-08-15  6:07       ` [PATCH v4 4/7] mempool: get the mempool capability Santosh Shukla
                         ` (4 subsequent siblings)
  7 siblings, 0 replies; 116+ messages in thread
From: Santosh Shukla @ 2017-08-15  6:07 UTC (permalink / raw)
  To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla

Removed the mempool API change deprecation notice and
updated the change info in release_17_11.

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
---
 doc/guides/rel_notes/deprecation.rst   | 9 ---------
 doc/guides/rel_notes/release_17_11.rst | 7 +++++++
 2 files changed, 7 insertions(+), 9 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 3362f3350..0e4cb1f95 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -44,15 +44,6 @@ Deprecation Notices
   PKT_RX_QINQ_STRIPPED, that are better described. The old flags and
   their behavior will be kept until 17.08 and will be removed in 17.11.
 
-* mempool: The following will be modified in 17.11:
-
-  - ``rte_mempool_xmem_size`` and ``rte_mempool_xmem_usage`` need to know
-    the mempool flag status so adding new param rte_mempool in those API.
-  - Removing __rte_unused int flag param from ``rte_mempool_generic_put``
-    and ``rte_mempool_generic_get`` API.
-  - ``rte_mempool`` flags data type will changed from int to
-    unsigned int.
-
 * ethdev: Tx offloads will no longer be enabled by default in 17.11.
   Instead, the ``rte_eth_txmode`` structure will be extended with
   bit field to enable each Tx offload.
diff --git a/doc/guides/rel_notes/release_17_11.rst b/doc/guides/rel_notes/release_17_11.rst
index 170f4f916..055ba10a4 100644
--- a/doc/guides/rel_notes/release_17_11.rst
+++ b/doc/guides/rel_notes/release_17_11.rst
@@ -110,6 +110,13 @@ API Changes
    Also, make sure to start the actual text at the margin.
    =========================================================
 
+* **The following changes made in mempool library**
+
+  * Moved ``flag`` datatype from int to unsigned int for ``rte_mempool``.
+  * Removed ``__rte_unused int flag`` param from ``rte_mempool_generic_put``
+    and ``rte_mempool_generic_get`` API.
+  * Added ``rte_mempool`` param in ``rte_mempool_xmem_size`` and
+    ``rte_mempool_xmem_usage``.
 
 ABI Changes
 -----------
-- 
2.11.0


* [PATCH v4 4/7] mempool: get the mempool capability
  2017-08-15  6:07     ` [PATCH v4 0/7] Infrastructure to support octeontx HW mempool manager Santosh Shukla
                         ` (2 preceding siblings ...)
  2017-08-15  6:07       ` [PATCH v4 3/7] doc: remove mempool api change notice Santosh Shukla
@ 2017-08-15  6:07       ` Santosh Shukla
  2017-09-04 14:32         ` Olivier MATZ
  2017-08-15  6:07       ` [PATCH v4 5/7] mempool: detect physical contiguous object in pool Santosh Shukla
                         ` (3 subsequent siblings)
  7 siblings, 1 reply; 116+ messages in thread
From: Santosh Shukla @ 2017-08-15  6:07 UTC (permalink / raw)
  To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla

Allow a mempool to advertise its capabilities.
A handler called rte_mempool_ops_get_capabilities is introduced.
- Upon the ->get_capabilities call, the mempool driver advertises its
capabilities by updating 'mp->flags'.

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
 lib/librte_mempool/rte_mempool.c           |  5 +++++
 lib/librte_mempool/rte_mempool.h           | 20 ++++++++++++++++++++
 lib/librte_mempool/rte_mempool_ops.c       | 14 ++++++++++++++
 lib/librte_mempool/rte_mempool_version.map |  7 +++++++
 4 files changed, 46 insertions(+)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index f95c01c00..d518c53de 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -529,6 +529,11 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 	if (mp->nb_mem_chunks != 0)
 		return -EEXIST;
 
+	/* Get mempool capability */
+	ret = rte_mempool_ops_get_capabilities(mp);
+	if (ret)
+		RTE_LOG(DEBUG, MEMPOOL, "get_capability not supported for %s\n", mp->name);
+
 	if (rte_xen_dom0_supported()) {
 		pg_sz = RTE_PGSIZE_2M;
 		pg_shift = rte_bsf32(pg_sz);
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 74e91d34f..bc4a1dac7 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -389,6 +389,12 @@ typedef int (*rte_mempool_dequeue_t)(struct rte_mempool *mp,
  */
 typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
 
+/**
+ * Get the mempool capability.
+ */
+typedef int (*rte_mempool_get_capabilities_t)(struct rte_mempool *mp);
+
+
 /** Structure defining mempool operations structure */
 struct rte_mempool_ops {
 	char name[RTE_MEMPOOL_OPS_NAMESIZE]; /**< Name of mempool ops struct. */
@@ -397,6 +403,7 @@ struct rte_mempool_ops {
 	rte_mempool_enqueue_t enqueue;   /**< Enqueue an object. */
 	rte_mempool_dequeue_t dequeue;   /**< Dequeue an object. */
 	rte_mempool_get_count get_count; /**< Get qty of available objs. */
+	rte_mempool_get_capabilities_t get_capabilities; /**< Get capability */
 } __rte_cache_aligned;
 
 #define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */
@@ -508,6 +515,19 @@ rte_mempool_ops_enqueue_bulk(struct rte_mempool *mp, void * const *obj_table,
 unsigned
 rte_mempool_ops_get_count(const struct rte_mempool *mp);
 
+
+/**
+ * @internal wrapper for mempool_ops get_capabilities callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @return
+ *   - 0: Success; Capability updated to mp->flags
+ *   - <0: Error; code of capability function.
+ */
+int
+rte_mempool_ops_get_capabilities(struct rte_mempool *mp);
+
 /**
  * @internal wrapper for mempool_ops free callback.
  *
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
index 5f24de250..31a73cc9a 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -37,6 +37,7 @@
 
 #include <rte_mempool.h>
 #include <rte_errno.h>
+#include <rte_dev.h>
 
 /* indirect jump table to support external memory pools. */
 struct rte_mempool_ops_table rte_mempool_ops_table = {
@@ -85,6 +86,7 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h)
 	ops->enqueue = h->enqueue;
 	ops->dequeue = h->dequeue;
 	ops->get_count = h->get_count;
+	ops->get_capabilities = h->get_capabilities;
 
 	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
 
@@ -123,6 +125,18 @@ rte_mempool_ops_get_count(const struct rte_mempool *mp)
 	return ops->get_count(mp);
 }
 
+/* wrapper to get external mempool capability. */
+int
+rte_mempool_ops_get_capabilities(struct rte_mempool *mp)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_get_ops(mp->ops_index);
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->get_capabilities, -ENOTSUP);
+	return ops->get_capabilities(mp);
+}
+
 /* sets mempool ops previously registered by rte_mempool_register_ops. */
 int
 rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index f9c079447..3c3471507 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -41,3 +41,10 @@ DPDK_16.07 {
 	rte_mempool_set_ops_byname;
 
 } DPDK_2.0;
+
+DPDK_17.11 {
+	global:
+
+	rte_mempool_ops_get_capabilities;
+
+} DPDK_16.07;
-- 
2.11.0


* [PATCH v4 5/7] mempool: detect physical contiguous object in pool
  2017-08-15  6:07     ` [PATCH v4 0/7] Infrastructure to support octeontx HW mempool manager Santosh Shukla
                         ` (3 preceding siblings ...)
  2017-08-15  6:07       ` [PATCH v4 4/7] mempool: get the mempool capability Santosh Shukla
@ 2017-08-15  6:07       ` Santosh Shukla
  2017-09-04 14:43         ` Olivier MATZ
  2017-08-15  6:07       ` [PATCH v4 6/7] mempool: introduce block size align flag Santosh Shukla
                         ` (2 subsequent siblings)
  7 siblings, 1 reply; 116+ messages in thread
From: Santosh Shukla @ 2017-08-15  6:07 UTC (permalink / raw)
  To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla

The memory area containing all the objects must be physically
contiguous.
Introduce a MEMPOOL_F_CAPA_PHYS_CONTIG flag for such use cases.

The flag is used to detect whether the pool area has sufficient space
to fit all objects; if it does not, -ENOSPC is returned.
This way, we make sure that all objects within the pool are contiguous.

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
 lib/librte_mempool/rte_mempool.c | 8 ++++++++
 lib/librte_mempool/rte_mempool.h | 1 +
 2 files changed, 9 insertions(+)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index d518c53de..19e5e6ddf 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -370,6 +370,14 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
 
 	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
 
+	/* Detect pool area has sufficient space for elements */
+	if (mp->flags & MEMPOOL_F_CAPA_PHYS_CONTIG) {
+		if (len < total_elt_sz * mp->size) {
+			RTE_LOG(ERR, MEMPOOL, "pool area %" PRIx64 " not enough\n", (uint64_t)len);
+			return -ENOSPC;
+		}
+	}
+
 	memhdr = rte_zmalloc("MEMPOOL_MEMHDR", sizeof(*memhdr), 0);
 	if (memhdr == NULL)
 		return -ENOMEM;
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index bc4a1dac7..a4bfdb56e 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -265,6 +265,7 @@ struct rte_mempool {
 #define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
 #define MEMPOOL_F_POOL_CREATED   0x0010 /**< Internal: pool is created. */
 #define MEMPOOL_F_NO_PHYS_CONTIG 0x0020 /**< Don't need physically contiguous objs. */
+#define MEMPOOL_F_CAPA_PHYS_CONTIG 0x0040 /**< Detect physically contiguous objs */
 
 /**
  * @internal When debug is enabled, store some statistics.
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 116+ messages in thread

* [PATCH v4 6/7] mempool: introduce block size align flag
  2017-08-15  6:07     ` [PATCH v4 0/7] Infrastructure to support octeontx HW mempool manager Santosh Shukla
                         ` (4 preceding siblings ...)
  2017-08-15  6:07       ` [PATCH v4 5/7] mempool: detect physical contiguous object in pool Santosh Shukla
@ 2017-08-15  6:07       ` Santosh Shukla
  2017-09-04 16:20         ` Olivier MATZ
  2017-08-15  6:07       ` [PATCH v4 7/7] mempool: update range info to pool Santosh Shukla
  2017-09-06 11:28       ` [PATCH v5 0/8] Infrastructure to support octeontx HW mempool manager Santosh Shukla
  7 siblings, 1 reply; 116+ messages in thread
From: Santosh Shukla @ 2017-08-15  6:07 UTC (permalink / raw)
  To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla

Some mempool HW, like the octeontx FPA block, demands that object start
addresses be aligned to the block size (/total_elt_sz).

Introduce a MEMPOOL_F_POOL_BLK_SZ_ALIGNED flag.
If this flag is set:
- Align the object start address to a multiple of total_elt_sz.
- Allocate one additional object. The additional object is needed to
  make sure that the requested 'n' objects get correctly populated.
  Example:

- Let's say we get a memory chunk of size 'x' from the memzone.
- The application has requested 'n' objects from the mempool.
- Ideally, we would use objects from start address 0 up to (x - block_sz)
  for the n objects.
- But the first object address, i.e. 0, is not necessarily aligned to
  block_sz.
- So we derive an offset value, 'off', for block_sz alignment purposes.
- That 'off' makes sure the start address of each object is blk_sz
  aligned.
- Applying 'off' may end up sacrificing the first block_sz area of the
  memzone area x. So the total number of objects which can fit in the
  pool area is n-1, which is incorrect behavior.

Therefore we request one additional object (/block_sz area) from the
memzone when the F_BLK_SZ_ALIGNED flag is set.
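The offset derivation described above can be sketched stand-alone. The
helper name below is invented for illustration; only the arithmetic
mirrors the line added to rte_mempool_populate_phys():

```c
#include <stddef.h>
#include <stdint.h>

/* Offset applied when MEMPOOL_F_POOL_BLK_SZ_ALIGNED is set:
 *   off = total_elt_sz - ((uintptr_t)vaddr % total_elt_sz);
 * so that (vaddr + off) is a multiple of total_elt_sz. */
static size_t
blk_align_offset(uintptr_t vaddr, size_t total_elt_sz)
{
	return total_elt_sz - (vaddr % total_elt_sz);
}
```

Note that for an already-aligned vaddr this yields off == total_elt_sz,
i.e. a full block is skipped either way, consistent with always reserving
one extra object.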

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
 lib/librte_mempool/rte_mempool.c | 16 +++++++++++++---
 lib/librte_mempool/rte_mempool.h |  1 +
 2 files changed, 14 insertions(+), 3 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 19e5e6ddf..7610f0d1f 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -239,10 +239,14 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
  */
 size_t
 rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
-		      __rte_unused const struct rte_mempool *mp)
+		      const struct rte_mempool *mp)
 {
 	size_t obj_per_page, pg_num, pg_sz;
 
+	if (mp && mp->flags & MEMPOOL_F_POOL_BLK_SZ_ALIGNED)
+		/* alignment need one additional object */
+		elt_num += 1;
+
 	if (total_elt_sz == 0)
 		return 0;
 
@@ -265,13 +269,16 @@ rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
 ssize_t
 rte_mempool_xmem_usage(__rte_unused void *vaddr, uint32_t elt_num,
 	size_t total_elt_sz, const phys_addr_t paddr[], uint32_t pg_num,
-	uint32_t pg_shift, __rte_unused const struct rte_mempool *mp)
+	uint32_t pg_shift, const struct rte_mempool *mp)
 {
 	uint32_t elt_cnt = 0;
 	phys_addr_t start, end;
 	uint32_t paddr_idx;
 	size_t pg_sz = (size_t)1 << pg_shift;
 
+	if (mp && mp->flags & MEMPOOL_F_POOL_BLK_SZ_ALIGNED)
+		/* alignment need one additional object */
+		elt_num += 1;
 
 	/* if paddr is NULL, assume contiguous memory */
 	if (paddr == NULL) {
@@ -389,7 +396,10 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
 	memhdr->free_cb = free_cb;
 	memhdr->opaque = opaque;
 
-	if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
+	if (mp->flags & MEMPOOL_F_POOL_BLK_SZ_ALIGNED)
+		/* align object start address to a multiple of total_elt_sz */
+		off = total_elt_sz - ((uintptr_t)vaddr % total_elt_sz);
+	else if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
 		off = RTE_PTR_ALIGN_CEIL(vaddr, 8) - vaddr;
 	else
 		off = RTE_PTR_ALIGN_CEIL(vaddr, RTE_CACHE_LINE_SIZE) - vaddr;
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index a4bfdb56e..d7c2416f4 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -266,6 +266,7 @@ struct rte_mempool {
 #define MEMPOOL_F_POOL_CREATED   0x0010 /**< Internal: pool is created. */
 #define MEMPOOL_F_NO_PHYS_CONTIG 0x0020 /**< Don't need physically contiguous objs. */
 #define MEMPOOL_F_CAPA_PHYS_CONTIG 0x0040 /**< Detect physically contiguous objs */
+#define MEMPOOL_F_POOL_BLK_SZ_ALIGNED 0x0080 /**< Align obj start address to total elem size */
 
 /**
  * @internal When debug is enabled, store some statistics.
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 116+ messages in thread

* [PATCH v4 7/7] mempool: update range info to pool
  2017-08-15  6:07     ` [PATCH v4 0/7] Infrastructure to support octeontx HW mempool manager Santosh Shukla
                         ` (5 preceding siblings ...)
  2017-08-15  6:07       ` [PATCH v4 6/7] mempool: introduce block size align flag Santosh Shukla
@ 2017-08-15  6:07       ` Santosh Shukla
  2017-09-06 11:28       ` [PATCH v5 0/8] Infrastructure to support octeontx HW mempool manager Santosh Shukla
  7 siblings, 0 replies; 116+ messages in thread
From: Santosh Shukla @ 2017-08-15  6:07 UTC (permalink / raw)
  To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla

HW pool managers, e.g. the Octeontx SoC, demand that s/w program the
start and end addresses of the pool. Currently, there is no such handle
in the external mempool interface.
Introduce a rte_mempool_update_range handle, which lets the HW pool
manager know which hugepage the common layer has selected:
For each hugepage - update its start/end address to the HW pool manager.
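A driver-side sketch of what such a handle might record per hugepage is
shown below. struct hw_pool_range, the function name, and the field names
are all hypothetical; a real driver (e.g. octeontx FPA) would program the
resulting range into device registers:

```c
#include <stddef.h>
#include <stdint.h>

typedef uint64_t phys_addr_t; /* stand-in for DPDK's phys_addr_t */

/* Hypothetical driver state tracking the pool's physical address range. */
struct hw_pool_range {
	phys_addr_t pa_start;
	phys_addr_t pa_end;
};

/* Called once per memory chunk the common layer populates: widen the
 * tracked [pa_start, pa_end) range so it covers the new chunk. */
static void
hw_pool_update_range(struct hw_pool_range *r, phys_addr_t paddr, size_t len)
{
	if (r->pa_end == 0 || paddr < r->pa_start)
		r->pa_start = paddr;
	if (paddr + len > r->pa_end)
		r->pa_end = paddr + len;
}
```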

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
 lib/librte_mempool/rte_mempool.c           |  3 +++
 lib/librte_mempool/rte_mempool.h           | 22 ++++++++++++++++++++++
 lib/librte_mempool/rte_mempool_ops.c       | 13 +++++++++++++
 lib/librte_mempool/rte_mempool_version.map |  1 +
 4 files changed, 39 insertions(+)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 7610f0d1f..df7996df8 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -363,6 +363,9 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
 	struct rte_mempool_memhdr *memhdr;
 	int ret;
 
+	/* update range info to mempool */
+	rte_mempool_ops_update_range(mp, vaddr, paddr, len);
+
 	/* create the internal ring if not already done */
 	if ((mp->flags & MEMPOOL_F_POOL_CREATED) == 0) {
 		ret = rte_mempool_ops_alloc(mp);
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index d7c2416f4..b59a522cd 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -396,6 +396,11 @@ typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
  */
 typedef int (*rte_mempool_get_capabilities_t)(struct rte_mempool *mp);
 
+/**
+ * Update range info to mempool.
+ */
+typedef void (*rte_mempool_update_range_t)(const struct rte_mempool *mp,
+		char *vaddr, phys_addr_t paddr, size_t len);
 
 /** Structure defining mempool operations structure */
 struct rte_mempool_ops {
@@ -406,6 +411,7 @@ struct rte_mempool_ops {
 	rte_mempool_dequeue_t dequeue;   /**< Dequeue an object. */
 	rte_mempool_get_count get_count; /**< Get qty of available objs. */
 	rte_mempool_get_capabilities_t get_capabilities; /**< Get capability */
+	rte_mempool_update_range_t update_range; /**< Update range to mempool */
 } __rte_cache_aligned;
 
 #define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */
@@ -531,6 +537,22 @@ int
 rte_mempool_ops_get_capabilities(struct rte_mempool *mp);
 
 /**
+ * @internal wrapper for mempool_ops update_range callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param vaddr
+ *   Pointer to the buffer virtual address
+ * @param paddr
+ *   Physical address of the buffer
+ * @param len
+ *   Pool size
+ */
+void
+rte_mempool_ops_update_range(const struct rte_mempool *mp,
+				char *vaddr, phys_addr_t paddr, size_t len);
+
+/**
  * @internal wrapper for mempool_ops free callback.
  *
  * @param mp
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
index 31a73cc9a..7bb52b3ca 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -87,6 +87,7 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h)
 	ops->dequeue = h->dequeue;
 	ops->get_count = h->get_count;
 	ops->get_capabilities = h->get_capabilities;
+	ops->update_range = h->update_range;
 
 	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
 
@@ -137,6 +138,18 @@ rte_mempool_ops_get_capabilities(struct rte_mempool *mp)
 	return ops->get_capabilities(mp);
 }
 
+/* wrapper to update range info to external mempool */
+void
+rte_mempool_ops_update_range(const struct rte_mempool *mp, char *vaddr,
+			     phys_addr_t paddr, size_t len)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_get_ops(mp->ops_index);
+	RTE_FUNC_PTR_OR_RET(ops->update_range);
+	ops->update_range(mp, vaddr, paddr, len);
+}
+
 /* sets mempool ops previously registered by rte_mempool_register_ops. */
 int
 rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index 3c3471507..2663001c3 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -46,5 +46,6 @@ DPDK_17.11 {
 	global:
 
 	rte_mempool_ops_get_capabilities;
+	rte_mempool_ops_update_range;
 
 } DPDK_16.07;
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 116+ messages in thread

* Re: [PATCH v4 1/7] mempool: fix flags data type
  2017-08-15  6:07       ` [PATCH v4 1/7] mempool: fix flags data type Santosh Shukla
@ 2017-09-04 14:11         ` Olivier MATZ
  2017-09-04 14:18           ` santosh
  0 siblings, 1 reply; 116+ messages in thread
From: Olivier MATZ @ 2017-09-04 14:11 UTC (permalink / raw)
  To: Santosh Shukla; +Cc: dev, thomas, jerin.jacob, hemant.agrawal

On Tue, Aug 15, 2017 at 11:37:37AM +0530, Santosh Shukla wrote:
> mp->flags is int, while the mempool API updates an unsigned int
> value in 'flags', so fix the 'flags' data type.
> 
> Patch also does mp->flags cleanup like:
> * Remove redundant 'flags' API description from
>   - __rte_mempool_generic_put
>   - __rte_mempool_generic_get
> 
> * Remove unused 'flags' param from
>   - rte_mempool_generic_put
>   - rte_mempool_generic_get
> 
> * Fix mempool var data types in mempool.c
>   - mz_flags is int; change it to unsigned int.

This bullet list makes me think that we should have at least 2 commits:
 mempool: remove unused flags argument
 mempool: change flags from int to unsigned int

^ permalink raw reply	[flat|nested] 116+ messages in thread

* Re: [PATCH v4 1/7] mempool: fix flags data type
  2017-09-04 14:11         ` Olivier MATZ
@ 2017-09-04 14:18           ` santosh
  0 siblings, 0 replies; 116+ messages in thread
From: santosh @ 2017-09-04 14:18 UTC (permalink / raw)
  To: Olivier MATZ; +Cc: dev, thomas, jerin.jacob, hemant.agrawal



On Monday 04 September 2017 07:41 PM, Olivier MATZ wrote:
> On Tue, Aug 15, 2017 at 11:37:37AM +0530, Santosh Shukla wrote:
>> mp->flags is int, while the mempool API updates an unsigned int
>> value in 'flags', so fix the 'flags' data type.
>>
>> Patch also does mp->flags cleanup like:
>> * Remove redundant 'flags' API description from
>>   - __rte_mempool_generic_put
>>   - __rte_mempool_generic_get
>>
>> * Remove unused 'flags' param from
>>   - rte_mempool_generic_put
>>   - rte_mempool_generic_get
>>
>> * Fix mempool var data types in mempool.c
>>   - mz_flags is int; change it to unsigned int.
> This bullet list makes me think that we should have at least 2 commits:
>  mempool: remove unused flags argument
>  mempool: change flags from int to unsigned int
>
Ok, will split in v5.

^ permalink raw reply	[flat|nested] 116+ messages in thread

* Re: [PATCH v4 2/7] mempool: add mempool arg in xmem size and usage
  2017-08-15  6:07       ` [PATCH v4 2/7] mempool: add mempool arg in xmem size and usage Santosh Shukla
@ 2017-09-04 14:22         ` Olivier MATZ
  2017-09-04 14:33           ` santosh
  0 siblings, 1 reply; 116+ messages in thread
From: Olivier MATZ @ 2017-09-04 14:22 UTC (permalink / raw)
  To: Santosh Shukla; +Cc: dev, thomas, jerin.jacob, hemant.agrawal

On Tue, Aug 15, 2017 at 11:37:38AM +0530, Santosh Shukla wrote:
> xmem_size and xmem_usage need to know the status of mp->flags.
> A following patch will make use of that.
> 
> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
> ---
>  drivers/net/xenvirt/rte_mempool_gntalloc.c |  5 +++--
>  lib/librte_mempool/rte_mempool.c           | 10 ++++++----
>  lib/librte_mempool/rte_mempool.h           |  8 ++++++--
>  test/test/test_mempool.c                   |  4 ++--
>  4 files changed, 17 insertions(+), 10 deletions(-)
> 
> diff --git a/drivers/net/xenvirt/rte_mempool_gntalloc.c b/drivers/net/xenvirt/rte_mempool_gntalloc.c
> index 73e82f808..ee0bda459 100644
> --- a/drivers/net/xenvirt/rte_mempool_gntalloc.c
> +++ b/drivers/net/xenvirt/rte_mempool_gntalloc.c
> @@ -114,7 +114,7 @@ _create_mempool(const char *name, unsigned elt_num, unsigned elt_size,
>  	pg_shift = rte_bsf32(pg_sz);
>  
>  	rte_mempool_calc_obj_size(elt_size, flags, &objsz);
> -	sz = rte_mempool_xmem_size(elt_num, objsz.total_size, pg_shift);
> +	sz = rte_mempool_xmem_size(elt_num, objsz.total_size, pg_shift, NULL);
>  	pg_num = sz >> pg_shift;
>  
>  	pa_arr = calloc(pg_num, sizeof(pa_arr[0]));

What is the meaning of passing NULL to rte_mempool_xmem_size()?
Does it mean that flags are ignored?

Wouldn't it be better to pass the mempool flags instead of the mempool
pointer?

^ permalink raw reply	[flat|nested] 116+ messages in thread

* Re: [PATCH v4 4/7] mempool: get the mempool capability
  2017-08-15  6:07       ` [PATCH v4 4/7] mempool: get the mempool capability Santosh Shukla
@ 2017-09-04 14:32         ` Olivier MATZ
  2017-09-04 14:44           ` santosh
  0 siblings, 1 reply; 116+ messages in thread
From: Olivier MATZ @ 2017-09-04 14:32 UTC (permalink / raw)
  To: Santosh Shukla; +Cc: dev, thomas, jerin.jacob, hemant.agrawal

On Tue, Aug 15, 2017 at 11:37:40AM +0530, Santosh Shukla wrote:
> Allow a mempool to advertise its capabilities.
> A handler called rte_mempool_ops_get_capabilities has been introduced.
> - Upon the ->get_capabilities call, the mempool driver will advertise
> its capabilities by updating 'mp->flags'.
> 
> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> ---
>  lib/librte_mempool/rte_mempool.c           |  5 +++++
>  lib/librte_mempool/rte_mempool.h           | 20 ++++++++++++++++++++
>  lib/librte_mempool/rte_mempool_ops.c       | 14 ++++++++++++++
>  lib/librte_mempool/rte_mempool_version.map |  7 +++++++
>  4 files changed, 46 insertions(+)
> 
> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> index f95c01c00..d518c53de 100644
> --- a/lib/librte_mempool/rte_mempool.c
> +++ b/lib/librte_mempool/rte_mempool.c
> @@ -529,6 +529,11 @@ rte_mempool_populate_default(struct rte_mempool *mp)
>  	if (mp->nb_mem_chunks != 0)
>  		return -EEXIST;
>  
> +	/* Get mempool capability */
> +	ret = rte_mempool_ops_get_capabilities(mp);
> +	if (ret)
> +		RTE_LOG(DEBUG, MEMPOOL, "get_capability not supported for %s\n", mp->name);
> +

there is probably a checkpatch error here (80 cols)

> +/**
> + * @internal wrapper for mempool_ops get_capabilities callback.
> + *
> + * @param mp
> + *   Pointer to the memory pool.
> + * @return
> + *   - 0: Success; Capability updated to mp->flags
> + *   - <0: Error; code of capability function.
> + */
> +int
> +rte_mempool_ops_get_capabilities(struct rte_mempool *mp);
> +

What does "Capability updated to mp->flags" mean?

Why not having instead:
 int rte_mempool_ops_get_capabilities(struct rte_mempool *mp,
     unsigned int *flags);

?

^ permalink raw reply	[flat|nested] 116+ messages in thread

* Re: [PATCH v4 2/7] mempool: add mempool arg in xmem size and usage
  2017-09-04 14:22         ` Olivier MATZ
@ 2017-09-04 14:33           ` santosh
  2017-09-04 14:46             ` Olivier MATZ
  0 siblings, 1 reply; 116+ messages in thread
From: santosh @ 2017-09-04 14:33 UTC (permalink / raw)
  To: Olivier MATZ; +Cc: dev, thomas, jerin.jacob, hemant.agrawal



On Monday 04 September 2017 07:52 PM, Olivier MATZ wrote:
> On Tue, Aug 15, 2017 at 11:37:38AM +0530, Santosh Shukla wrote:
>> xmem_size and xmem_usage need to know the status of mp->flags.
>> A following patch will make use of that.
>>
>> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
>> ---
>>  drivers/net/xenvirt/rte_mempool_gntalloc.c |  5 +++--
>>  lib/librte_mempool/rte_mempool.c           | 10 ++++++----
>>  lib/librte_mempool/rte_mempool.h           |  8 ++++++--
>>  test/test/test_mempool.c                   |  4 ++--
>>  4 files changed, 17 insertions(+), 10 deletions(-)
>>
>> diff --git a/drivers/net/xenvirt/rte_mempool_gntalloc.c b/drivers/net/xenvirt/rte_mempool_gntalloc.c
>> index 73e82f808..ee0bda459 100644
>> --- a/drivers/net/xenvirt/rte_mempool_gntalloc.c
>> +++ b/drivers/net/xenvirt/rte_mempool_gntalloc.c
>> @@ -114,7 +114,7 @@ _create_mempool(const char *name, unsigned elt_num, unsigned elt_size,
>>  	pg_shift = rte_bsf32(pg_sz);
>>  
>>  	rte_mempool_calc_obj_size(elt_size, flags, &objsz);
>> -	sz = rte_mempool_xmem_size(elt_num, objsz.total_size, pg_shift);
>> +	sz = rte_mempool_xmem_size(elt_num, objsz.total_size, pg_shift, NULL);
>>  	pg_num = sz >> pg_shift;
>>  
>>  	pa_arr = calloc(pg_num, sizeof(pa_arr[0]));
> What is the meaning of passing NULL to rte_mempool_xmem_size()?
> Does it mean that flags are ignored?

Yes, that means the flags are ignored.

> Wouldn't it be better to pass the mempool flags instead of the mempool
> pointer?

Keeping the mempool as the param rather than the flags is useful in case the
user wants to do/refer to more things in the future in the xmem_size/usage()
APIs. Otherwise, one more param would have to be appended to the API and a
deprecation notice sent out. Btw, it's a const param, so it won't hurt, right?

However, if you still want to restrict the param to mp->flags, then please
suggest so.

Thanks. 

^ permalink raw reply	[flat|nested] 116+ messages in thread

* Re: [PATCH v4 5/7] mempool: detect physical contiguous object in pool
  2017-08-15  6:07       ` [PATCH v4 5/7] mempool: detect physical contiguous object in pool Santosh Shukla
@ 2017-09-04 14:43         ` Olivier MATZ
  2017-09-04 14:47           ` santosh
  0 siblings, 1 reply; 116+ messages in thread
From: Olivier MATZ @ 2017-09-04 14:43 UTC (permalink / raw)
  To: Santosh Shukla; +Cc: dev, thomas, jerin.jacob, hemant.agrawal

On Tue, Aug 15, 2017 at 11:37:41AM +0530, Santosh Shukla wrote:
> The memory area containing all the objects must be physically
> contiguous.
> Introduce a MEMPOOL_F_CAPA_PHYS_CONTIG flag for such use cases.
> 
> The flag is used to detect whether the pool area has sufficient space
> to fit all objects; if it does not, -ENOSPC is returned.
> This way, we make sure that all objects within the pool are contiguous.
> 
> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> ---
>  lib/librte_mempool/rte_mempool.c | 8 ++++++++
>  lib/librte_mempool/rte_mempool.h | 1 +
>  2 files changed, 9 insertions(+)
> 
> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> index d518c53de..19e5e6ddf 100644
> --- a/lib/librte_mempool/rte_mempool.c
> +++ b/lib/librte_mempool/rte_mempool.c
> @@ -370,6 +370,14 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
>  
>  	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
>  
> +	/* Detect pool area has sufficient space for elements */
> +	if (mp->flags & MEMPOOL_F_CAPA_PHYS_CONTIG) {
> +		if (len < total_elt_sz * mp->size) {
> +			RTE_LOG(ERR, MEMPOOL, "pool area %" PRIx64 " not enough\n", (uint64_t)len);
> +			return -ENOSPC;
> +		}
> +	}
> +
>  	memhdr = rte_zmalloc("MEMPOOL_MEMHDR", sizeof(*memhdr), 0);
>  	if (memhdr == NULL)
>  		return -ENOMEM;
> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> index bc4a1dac7..a4bfdb56e 100644
> --- a/lib/librte_mempool/rte_mempool.h
> +++ b/lib/librte_mempool/rte_mempool.h
> @@ -265,6 +265,7 @@ struct rte_mempool {
>  #define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
>  #define MEMPOOL_F_POOL_CREATED   0x0010 /**< Internal: pool is created. */
>  #define MEMPOOL_F_NO_PHYS_CONTIG 0x0020 /**< Don't need physically contiguous objs. */
> +#define MEMPOOL_F_CAPA_PHYS_CONTIG 0x0040 /**< Detect physically contiguous objs */
>  

The description should be longer. It is impossible to understand the meaning
of this capability flag just by reading the comment.

Example:
/**
 * This capability flag is advertised by a mempool handler if the whole
 * memory area containing the objects must be physically contiguous.
 */

^ permalink raw reply	[flat|nested] 116+ messages in thread

* Re: [PATCH v4 4/7] mempool: get the mempool capability
  2017-09-04 14:32         ` Olivier MATZ
@ 2017-09-04 14:44           ` santosh
  2017-09-04 15:56             ` Olivier MATZ
  0 siblings, 1 reply; 116+ messages in thread
From: santosh @ 2017-09-04 14:44 UTC (permalink / raw)
  To: Olivier MATZ; +Cc: dev, thomas, jerin.jacob, hemant.agrawal

Hi Olivier,


On Monday 04 September 2017 08:02 PM, Olivier MATZ wrote:
> On Tue, Aug 15, 2017 at 11:37:40AM +0530, Santosh Shukla wrote:
>> Allow a mempool to advertise its capabilities.
>> A handler called rte_mempool_ops_get_capabilities has been introduced.
>> - Upon the ->get_capabilities call, the mempool driver will advertise
>> its capabilities by updating 'mp->flags'.
>>
>> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
>> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
>> ---
>>  lib/librte_mempool/rte_mempool.c           |  5 +++++
>>  lib/librte_mempool/rte_mempool.h           | 20 ++++++++++++++++++++
>>  lib/librte_mempool/rte_mempool_ops.c       | 14 ++++++++++++++
>>  lib/librte_mempool/rte_mempool_version.map |  7 +++++++
>>  4 files changed, 46 insertions(+)
>>
>> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
>> index f95c01c00..d518c53de 100644
>> --- a/lib/librte_mempool/rte_mempool.c
>> +++ b/lib/librte_mempool/rte_mempool.c
>> @@ -529,6 +529,11 @@ rte_mempool_populate_default(struct rte_mempool *mp)
>>  	if (mp->nb_mem_chunks != 0)
>>  		return -EEXIST;
>>  
>> +	/* Get mempool capability */
>> +	ret = rte_mempool_ops_get_capabilities(mp);
>> +	if (ret)
>> +		RTE_LOG(DEBUG, MEMPOOL, "get_capability not supported for %s\n", mp->name);
>> +
> there is probably a checkpatch error here (80 cols)

For debug logs, a line-over-80-chars warning is acceptable, right?
Anyway, I will reduce the message to less than 80 chars in v5.

>> +/**
>> + * @internal wrapper for mempool_ops get_capabilities callback.
>> + *
>> + * @param mp
>> + *   Pointer to the memory pool.
>> + * @return
>> + *   - 0: Success; Capability updated to mp->flags
>> + *   - <0: Error; code of capability function.
>> + */
>> +int
>> +rte_mempool_ops_get_capabilities(struct rte_mempool *mp);
>> +
> What does "Capability updated to mp->flags" mean?

It says that the external mempool driver has updated its pool capabilities
in mp->flags. I'll reword it in v5.

> Why not having instead:
>  int rte_mempool_ops_get_capabilities(struct rte_mempool *mp,
>      unsigned int *flags);
>
> ?

No strong opinion. But since we are already passing the mempool as a param,
why not update the flag info into mp->flags?
However, I see your point: I guess you want to explicitly highlight the flags
as the capability-update output in the second param. In that case, how about
keeping the first mempool param 'const', like below:

int rte_mempool_ops_get_capabilities(const struct rte_mempool *mp,
     unsigned int *flags);

Are you OK with the const change in the above API?

Queued for v5 after your ack/nack on the above const change.

Thanks. 

^ permalink raw reply	[flat|nested] 116+ messages in thread

* Re: [PATCH v4 2/7] mempool: add mempool arg in xmem size and usage
  2017-09-04 14:33           ` santosh
@ 2017-09-04 14:46             ` Olivier MATZ
  2017-09-04 14:58               ` santosh
  0 siblings, 1 reply; 116+ messages in thread
From: Olivier MATZ @ 2017-09-04 14:46 UTC (permalink / raw)
  To: santosh; +Cc: dev, thomas, jerin.jacob, hemant.agrawal

On Mon, Sep 04, 2017 at 08:03:53PM +0530, santosh wrote:
> 
> 
> On Monday 04 September 2017 07:52 PM, Olivier MATZ wrote:
> > On Tue, Aug 15, 2017 at 11:37:38AM +0530, Santosh Shukla wrote:
> >> xmem_size and xmem_usage need to know the status of mp->flags.
> >> A following patch will make use of that.
> >>
> >> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
> >> ---
> >>  drivers/net/xenvirt/rte_mempool_gntalloc.c |  5 +++--
> >>  lib/librte_mempool/rte_mempool.c           | 10 ++++++----
> >>  lib/librte_mempool/rte_mempool.h           |  8 ++++++--
> >>  test/test/test_mempool.c                   |  4 ++--
> >>  4 files changed, 17 insertions(+), 10 deletions(-)
> >>
> >> diff --git a/drivers/net/xenvirt/rte_mempool_gntalloc.c b/drivers/net/xenvirt/rte_mempool_gntalloc.c
> >> index 73e82f808..ee0bda459 100644
> >> --- a/drivers/net/xenvirt/rte_mempool_gntalloc.c
> >> +++ b/drivers/net/xenvirt/rte_mempool_gntalloc.c
> >> @@ -114,7 +114,7 @@ _create_mempool(const char *name, unsigned elt_num, unsigned elt_size,
> >>  	pg_shift = rte_bsf32(pg_sz);
> >>  
> >>  	rte_mempool_calc_obj_size(elt_size, flags, &objsz);
> >> -	sz = rte_mempool_xmem_size(elt_num, objsz.total_size, pg_shift);
> >> +	sz = rte_mempool_xmem_size(elt_num, objsz.total_size, pg_shift, NULL);
> >>  	pg_num = sz >> pg_shift;
> >>  
> >>  	pa_arr = calloc(pg_num, sizeof(pa_arr[0]));
> > What is the meaning of passing NULL to rte_mempool_xmem_size()?
> > Does it mean that flags are ignored?
> 
> Yes, that means the flags are ignored.

But the flags change the return value of rte_mempool_xmem_size(), right?
So, correct me if I'm wrong, but if we don't pass the proper flags, the
returned value won't be the one we expect.

> 
> > Wouldn't it be better to pass the mempool flags instead of the mempool
> > pointer?
> 
> Keeping the mempool as the param rather than the flags is useful in case the
> user wants to do/refer to more things in the future in the xmem_size/usage()
> APIs. Otherwise, one more param would have to be appended to the API and a
> deprecation notice sent out. Btw, it's a const param, so it won't hurt, right?
> 
> However, if you still want to restrict the param to mp->flags, then please
> suggest so.
> 
> Thanks. 
> 
> 

^ permalink raw reply	[flat|nested] 116+ messages in thread

* Re: [PATCH v4 5/7] mempool: detect physical contiguous object in pool
  2017-09-04 14:43         ` Olivier MATZ
@ 2017-09-04 14:47           ` santosh
  2017-09-04 16:00             ` Olivier MATZ
  0 siblings, 1 reply; 116+ messages in thread
From: santosh @ 2017-09-04 14:47 UTC (permalink / raw)
  To: Olivier MATZ; +Cc: dev, thomas, jerin.jacob, hemant.agrawal


On Monday 04 September 2017 08:13 PM, Olivier MATZ wrote:
> On Tue, Aug 15, 2017 at 11:37:41AM +0530, Santosh Shukla wrote:
>> The memory area containing all the objects must be physically
>> contiguous.
>> Introduce a MEMPOOL_F_CAPA_PHYS_CONTIG flag for such use cases.
>>
>> The flag is used to detect whether the pool area has sufficient space
>> to fit all objects; if it does not, -ENOSPC is returned.
>> This way, we make sure that all objects within the pool are contiguous.
>>
>> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
>> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
>> ---
>>  lib/librte_mempool/rte_mempool.c | 8 ++++++++
>>  lib/librte_mempool/rte_mempool.h | 1 +
>>  2 files changed, 9 insertions(+)
>>
>> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
>> index d518c53de..19e5e6ddf 100644
>> --- a/lib/librte_mempool/rte_mempool.c
>> +++ b/lib/librte_mempool/rte_mempool.c
>> @@ -370,6 +370,14 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
>>  
>>  	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
>>  
>> +	/* Detect pool area has sufficient space for elements */
>> +	if (mp->flags & MEMPOOL_F_CAPA_PHYS_CONTIG) {
>> +		if (len < total_elt_sz * mp->size) {
>> +			RTE_LOG(ERR, MEMPOOL, "pool area %" PRIx64 " not enough\n", (uint64_t)len);
>> +			return -ENOSPC;
>> +		}
>> +	}
>> +
>>  	memhdr = rte_zmalloc("MEMPOOL_MEMHDR", sizeof(*memhdr), 0);
>>  	if (memhdr == NULL)
>>  		return -ENOMEM;
>> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
>> index bc4a1dac7..a4bfdb56e 100644
>> --- a/lib/librte_mempool/rte_mempool.h
>> +++ b/lib/librte_mempool/rte_mempool.h
>> @@ -265,6 +265,7 @@ struct rte_mempool {
>>  #define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
>>  #define MEMPOOL_F_POOL_CREATED   0x0010 /**< Internal: pool is created. */
>>  #define MEMPOOL_F_NO_PHYS_CONTIG 0x0020 /**< Don't need physically contiguous objs. */
>> +#define MEMPOOL_F_CAPA_PHYS_CONTIG 0x0040 /**< Detect physically contiguous objs */
>>  
> The description should be longer. It is impossible to understand the meaning
> of this capability flag just by reading the comment.
>
> Example:
> /**
>  * This capability flag is advertised by a mempool handler if the whole
>  * memory area containing the objects must be physically contiguous.
>  */

Will fix in v5, thanks.

^ permalink raw reply	[flat|nested] 116+ messages in thread

* Re: [PATCH v4 2/7] mempool: add mempool arg in xmem size and usage
  2017-09-04 14:46             ` Olivier MATZ
@ 2017-09-04 14:58               ` santosh
  2017-09-04 15:23                 ` Olivier MATZ
  0 siblings, 1 reply; 116+ messages in thread
From: santosh @ 2017-09-04 14:58 UTC (permalink / raw)
  To: Olivier MATZ; +Cc: dev, thomas, jerin.jacob, hemant.agrawal


On Monday 04 September 2017 08:16 PM, Olivier MATZ wrote:
> On Mon, Sep 04, 2017 at 08:03:53PM +0530, santosh wrote:
>>
>> On Monday 04 September 2017 07:52 PM, Olivier MATZ wrote:
>>> On Tue, Aug 15, 2017 at 11:37:38AM +0530, Santosh Shukla wrote:
>>>> xmem_size and xmem_usage need to know the status of mp->flag.
>>>> Following patch will make use of that.
>>>>
>>>> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
>>>> ---
>>>>  drivers/net/xenvirt/rte_mempool_gntalloc.c |  5 +++--
>>>>  lib/librte_mempool/rte_mempool.c           | 10 ++++++----
>>>>  lib/librte_mempool/rte_mempool.h           |  8 ++++++--
>>>>  test/test/test_mempool.c                   |  4 ++--
>>>>  4 files changed, 17 insertions(+), 10 deletions(-)
>>>>
>>>> diff --git a/drivers/net/xenvirt/rte_mempool_gntalloc.c b/drivers/net/xenvirt/rte_mempool_gntalloc.c
>>>> index 73e82f808..ee0bda459 100644
>>>> --- a/drivers/net/xenvirt/rte_mempool_gntalloc.c
>>>> +++ b/drivers/net/xenvirt/rte_mempool_gntalloc.c
>>>> @@ -114,7 +114,7 @@ _create_mempool(const char *name, unsigned elt_num, unsigned elt_size,
>>>>  	pg_shift = rte_bsf32(pg_sz);
>>>>  
>>>>  	rte_mempool_calc_obj_size(elt_size, flags, &objsz);
>>>> -	sz = rte_mempool_xmem_size(elt_num, objsz.total_size, pg_shift);
>>>> +	sz = rte_mempool_xmem_size(elt_num, objsz.total_size, pg_shift, NULL);
>>>>  	pg_num = sz >> pg_shift;
>>>>  
>>>>  	pa_arr = calloc(pg_num, sizeof(pa_arr[0]));
>>> What is the meaning of passing NULL to rte_mempool_xmem_size()?
>>> Does it mean that flags are ignored?
>> Yes that mean flags are ignored.
> But the flags change the return value of rte_mempool_xmem_size(), right?

No, it won't change.

> So, correct me if I'm wrong, but if we don't pass the proper flags, the
> returned value won't be the one we expect.

Passing any flag value other than MEMPOOL_F_POOL_BLK_SZ_ALIGNED won't impact the return value.

>>> Wouldn't it be better to pass the mempool flags instead of the mempool
>>> pointer?
>> Keeping mempool as param rather flag useful in case user want to do/refer more
>> thing in future for xmem_size/usage() api. Otherwise he has append one more param
>> to api and send out deprecation notice.. Btw, its const param so won;t hurt right?
>>
>> However if you still want to restrict param to mp->flags then pl. suggest.
>>
>> Thanks. 
>>
>>


* Re: [PATCH v4 2/7] mempool: add mempool arg in xmem size and usage
  2017-09-04 14:58               ` santosh
@ 2017-09-04 15:23                 ` Olivier MATZ
  2017-09-04 15:52                   ` santosh
  0 siblings, 1 reply; 116+ messages in thread
From: Olivier MATZ @ 2017-09-04 15:23 UTC (permalink / raw)
  To: santosh; +Cc: dev, thomas, jerin.jacob, hemant.agrawal

On Mon, Sep 04, 2017 at 08:28:36PM +0530, santosh wrote:
> 
> On Monday 04 September 2017 08:16 PM, Olivier MATZ wrote:
> > On Mon, Sep 04, 2017 at 08:03:53PM +0530, santosh wrote:
> >>
> >> On Monday 04 September 2017 07:52 PM, Olivier MATZ wrote:
> >>> On Tue, Aug 15, 2017 at 11:37:38AM +0530, Santosh Shukla wrote:
> >>>> xmem_size and xmem_usage need to know the status of mp->flag.
> >>>> Following patch will make use of that.
> >>>>
> >>>> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
> >>>> ---
> >>>>  drivers/net/xenvirt/rte_mempool_gntalloc.c |  5 +++--
> >>>>  lib/librte_mempool/rte_mempool.c           | 10 ++++++----
> >>>>  lib/librte_mempool/rte_mempool.h           |  8 ++++++--
> >>>>  test/test/test_mempool.c                   |  4 ++--
> >>>>  4 files changed, 17 insertions(+), 10 deletions(-)
> >>>>
> >>>> diff --git a/drivers/net/xenvirt/rte_mempool_gntalloc.c b/drivers/net/xenvirt/rte_mempool_gntalloc.c
> >>>> index 73e82f808..ee0bda459 100644
> >>>> --- a/drivers/net/xenvirt/rte_mempool_gntalloc.c
> >>>> +++ b/drivers/net/xenvirt/rte_mempool_gntalloc.c
> >>>> @@ -114,7 +114,7 @@ _create_mempool(const char *name, unsigned elt_num, unsigned elt_size,
> >>>>  	pg_shift = rte_bsf32(pg_sz);
> >>>>  
> >>>>  	rte_mempool_calc_obj_size(elt_size, flags, &objsz);
> >>>> -	sz = rte_mempool_xmem_size(elt_num, objsz.total_size, pg_shift);
> >>>> +	sz = rte_mempool_xmem_size(elt_num, objsz.total_size, pg_shift, NULL);
> >>>>  	pg_num = sz >> pg_shift;
> >>>>  
> >>>>  	pa_arr = calloc(pg_num, sizeof(pa_arr[0]));
> >>> What is the meaning of passing NULL to rte_mempool_xmem_size()?
> >>> Does it mean that flags are ignored?
> >> Yes that mean flags are ignored.
> > But the flags change the return value of rte_mempool_xmem_size(), right?
> 
> no, It won't change.
> 
> > So, correct me if I'm wrong, but if we don't pass the proper flags, the
> > returned value won't be the one we expect.
> 
> passing flag value other than MEMPOOL_F_POOL_BLK_SZ_ALIGNED, wont impact return value.

That's the case today with your patches.

But if someone else wants to add another flag, this may change.
And you do not describe in the help that mp can be NULL, why that would
occur, or what it means.

> >>> Wouldn't it be better to pass the mempool flags instead of the mempool
> >>> pointer?
> >> Keeping mempool as param rather flag useful in case user want to do/refer more
> >> thing in future for xmem_size/usage() api. Otherwise he has append one more param
> >> to api and send out deprecation notice.. Btw, its const param so won;t hurt right?
> >>
> >> However if you still want to restrict param to mp->flags then pl. suggest.

Yes, it looks better to pass the flags instead of the mempool pointer.


* Re: [PATCH v4 2/7] mempool: add mempool arg in xmem size and usage
  2017-09-04 15:23                 ` Olivier MATZ
@ 2017-09-04 15:52                   ` santosh
  0 siblings, 0 replies; 116+ messages in thread
From: santosh @ 2017-09-04 15:52 UTC (permalink / raw)
  To: Olivier MATZ; +Cc: dev, thomas, jerin.jacob, hemant.agrawal


On Monday 04 September 2017 08:53 PM, Olivier MATZ wrote:
> On Mon, Sep 04, 2017 at 08:28:36PM +0530, santosh wrote:
>> On Monday 04 September 2017 08:16 PM, Olivier MATZ wrote:
>>> On Mon, Sep 04, 2017 at 08:03:53PM +0530, santosh wrote:
>>>> On Monday 04 September 2017 07:52 PM, Olivier MATZ wrote:
>>>>> On Tue, Aug 15, 2017 at 11:37:38AM +0530, Santosh Shukla wrote:
>>>>>> xmem_size and xmem_usage need to know the status of mp->flag.
>>>>>> Following patch will make use of that.
>>>>>>
>>>>>> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
>>>>>> ---
>>>>>>  drivers/net/xenvirt/rte_mempool_gntalloc.c |  5 +++--
>>>>>>  lib/librte_mempool/rte_mempool.c           | 10 ++++++----
>>>>>>  lib/librte_mempool/rte_mempool.h           |  8 ++++++--
>>>>>>  test/test/test_mempool.c                   |  4 ++--
>>>>>>  4 files changed, 17 insertions(+), 10 deletions(-)
>>>>>>
>>>>>> diff --git a/drivers/net/xenvirt/rte_mempool_gntalloc.c b/drivers/net/xenvirt/rte_mempool_gntalloc.c
>>>>>> index 73e82f808..ee0bda459 100644
>>>>>> --- a/drivers/net/xenvirt/rte_mempool_gntalloc.c
>>>>>> +++ b/drivers/net/xenvirt/rte_mempool_gntalloc.c
>>>>>> @@ -114,7 +114,7 @@ _create_mempool(const char *name, unsigned elt_num, unsigned elt_size,
>>>>>>  	pg_shift = rte_bsf32(pg_sz);
>>>>>>  
>>>>>>  	rte_mempool_calc_obj_size(elt_size, flags, &objsz);
>>>>>> -	sz = rte_mempool_xmem_size(elt_num, objsz.total_size, pg_shift);
>>>>>> +	sz = rte_mempool_xmem_size(elt_num, objsz.total_size, pg_shift, NULL);
>>>>>>  	pg_num = sz >> pg_shift;
>>>>>>  
>>>>>>  	pa_arr = calloc(pg_num, sizeof(pa_arr[0]));
>>>>> What is the meaning of passing NULL to rte_mempool_xmem_size()?
>>>>> Does it mean that flags are ignored?
>>>> Yes that mean flags are ignored.
>>> But the flags change the return value of rte_mempool_xmem_size(), right?
>> no, It won't change.
>>
>>> So, correct me if I'm wrong, but if we don't pass the proper flags, the
>>> returned value won't be the one we expect.
>> passing flag value other than MEMPOOL_F_POOL_BLK_SZ_ALIGNED, wont impact return value.
> That's the case today with your patches.
>
> But if someone else wants to add another flag, this may change.

Trying to understand your point of view here:
- The flags field is 64 bits wide, and any new flag would set a distinct
power-of-two bit. The 'if' condition in xmem_size() would then not match,
so elt_num won't be incremented and the return value won't be affected, right?

> And you do not describe in the help that mp can be NULL, why it would
> occur, and what does that mean.

Agreed. It meant the flags are ignored in the particular case below, where the upper
xen driver API, i.e. __create_mempool(), calls _xmem_size() before pool creation (a valid case, though).


>>>>> Wouldn't it be better to pass the mempool flags instead of the mempool
>>>>> pointer?
>>>> Keeping mempool as param rather flag useful in case user want to do/refer more
>>>> thing in future for xmem_size/usage() api. Otherwise he has append one more param
>>>> to api and send out deprecation notice.. Btw, its const param so won;t hurt right?
>>>>
>>>> However if you still want to restrict param to mp->flags then pl. suggest.
> Yes, it looks better to pass the flags instead of the mempool pointer.

ok, queued for v5. Thanks.


* Re: [PATCH v4 4/7] mempool: get the mempool capability
  2017-09-04 14:44           ` santosh
@ 2017-09-04 15:56             ` Olivier MATZ
  2017-09-04 16:29               ` santosh
  0 siblings, 1 reply; 116+ messages in thread
From: Olivier MATZ @ 2017-09-04 15:56 UTC (permalink / raw)
  To: santosh; +Cc: dev, thomas, jerin.jacob, hemant.agrawal

On Mon, Sep 04, 2017 at 08:14:39PM +0530, santosh wrote:
> Hi Olivier,
> 
> 
> On Monday 04 September 2017 08:02 PM, Olivier MATZ wrote:
> > On Tue, Aug 15, 2017 at 11:37:40AM +0530, Santosh Shukla wrote:
> >> Allow mempool to advertise its capability.
> >> A handler been introduced called rte_mempool_ops_get_capabilities.
> >> - Upon ->get_capabilities call, mempool driver will advertise
> >> capability by updating to 'mp->flags'.
> >>
> >> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
> >> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> >> ---
> >>  lib/librte_mempool/rte_mempool.c           |  5 +++++
> >>  lib/librte_mempool/rte_mempool.h           | 20 ++++++++++++++++++++
> >>  lib/librte_mempool/rte_mempool_ops.c       | 14 ++++++++++++++
> >>  lib/librte_mempool/rte_mempool_version.map |  7 +++++++
> >>  4 files changed, 46 insertions(+)
> >>
> >> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> >> index f95c01c00..d518c53de 100644
> >> --- a/lib/librte_mempool/rte_mempool.c
> >> +++ b/lib/librte_mempool/rte_mempool.c
> >> @@ -529,6 +529,11 @@ rte_mempool_populate_default(struct rte_mempool *mp)
> >>  	if (mp->nb_mem_chunks != 0)
> >>  		return -EEXIST;
> >>  
> >> +	/* Get mempool capability */
> >> +	ret = rte_mempool_ops_get_capabilities(mp);
> >> +	if (ret)
> >> +		RTE_LOG(DEBUG, MEMPOOL, "get_capability not supported for %s\n", mp->name);
> >> +
> > there is probably a checkpatch error here (80 cols)
> 
> for debug, line over 80 char warning acceptable, right?
> anyways, I will reduce verbose to less than 80 in v5.

What do you mean by "for debug"?

All lines should be shorter than 80 cols, except if that is not
possible without spliting a string or making the code hard to
read or maintain.

> 
> >> +/**
> >> + * @internal wrapper for mempool_ops get_capabilities callback.
> >> + *
> >> + * @param mp
> >> + *   Pointer to the memory pool.
> >> + * @return
> >> + *   - 0: Success; Capability updated to mp->flags
> >> + *   - <0: Error; code of capability function.
> >> + */
> >> +int
> >> +rte_mempool_ops_get_capabilities(struct rte_mempool *mp);
> >> +
> > What does "Capability updated to mp->flags" mean?
> 
> it says that external mempool driver has updated his pool capability in mp->flags.
> I'll reword in v5.

Please, can you explain what does "update" mean?
Is it masked? Or-ed?

> 
> > Why not having instead:
> >  int rte_mempool_ops_get_capabilities(struct rte_mempool *mp,
> >      unsigned int *flags);
> >
> > ?
> 
> No strong opinion, But Since we already passing mempool as param why not update
> flag info into mp->flag.

From an API perspective, we expect that a function called
"mempool_ops_get_capabilities" returns something.

> However I see your, I guess you want explicitly highlight flag as capability update {action}
> in second param, in that case how about keeping first mempool param 'const' like below:
> 
> int rte_mempool_ops_get_capabilities(const struct rte_mempool *mp,
>      unsigned int *flags);
> 
> are you ok with const change in above API.

Yes, adding the const makes sense here.


* Re: [PATCH v4 5/7] mempool: detect physical contiguous object in pool
  2017-09-04 14:47           ` santosh
@ 2017-09-04 16:00             ` Olivier MATZ
  0 siblings, 0 replies; 116+ messages in thread
From: Olivier MATZ @ 2017-09-04 16:00 UTC (permalink / raw)
  To: santosh; +Cc: dev, thomas, jerin.jacob, hemant.agrawal

On Mon, Sep 04, 2017 at 08:17:11PM +0530, santosh wrote:
> 
> On Monday 04 September 2017 08:13 PM, Olivier MATZ wrote:
> > On Tue, Aug 15, 2017 at 11:37:41AM +0530, Santosh Shukla wrote:
> >> The memory area containing all the objects must be physically
> >> contiguous.
> >> Introducing MEMPOOL_F_CAPA_PHYS_CONTIG flag for such use-case.
> >>
> >> The flag useful to detect whether pool area has sufficient space
> >> to fit all objects. If not then return -ENOSPC.
> >> This way, we make sure that all object within a pool is contiguous.
> >>
> >> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
> >> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> >> ---
> >>  lib/librte_mempool/rte_mempool.c | 8 ++++++++
> >>  lib/librte_mempool/rte_mempool.h | 1 +
> >>  2 files changed, 9 insertions(+)
> >>
> >> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> >> index d518c53de..19e5e6ddf 100644
> >> --- a/lib/librte_mempool/rte_mempool.c
> >> +++ b/lib/librte_mempool/rte_mempool.c
> >> @@ -370,6 +370,14 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
> >>  
> >>  	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
> >>  
> >> +	/* Detect pool area has sufficient space for elements */
> >> +	if (mp->flags & MEMPOOL_F_CAPA_PHYS_CONTIG) {
> >> +		if (len < total_elt_sz * mp->size) {
> >> +			RTE_LOG(ERR, MEMPOOL, "pool area %" PRIx64 " not enough\n", (uint64_t)len);
> >> +			return -ENOSPC;
> >> +		}
> >> +	}
> >> +
> >>  	memhdr = rte_zmalloc("MEMPOOL_MEMHDR", sizeof(*memhdr), 0);
> >>  	if (memhdr == NULL)
> >>  		return -ENOMEM;
> >> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> >> index bc4a1dac7..a4bfdb56e 100644
> >> --- a/lib/librte_mempool/rte_mempool.h
> >> +++ b/lib/librte_mempool/rte_mempool.h
> >> @@ -265,6 +265,7 @@ struct rte_mempool {
> >>  #define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
> >>  #define MEMPOOL_F_POOL_CREATED   0x0010 /**< Internal: pool is created. */
> >>  #define MEMPOOL_F_NO_PHYS_CONTIG 0x0020 /**< Don't need physically contiguous objs. */
> >> +#define MEMPOOL_F_CAPA_PHYS_CONTIG 0x0040 /**< Detect physcially contiguous objs */
> >>  
> > The description should be longer. It is impossible to understand what is the
> > meaning of this capability flag by just reading the comment.
> >
> > Example:
> > /**
> >  * This capability flag is advertised by a mempool handler if the whole
> >  * memory area containing the objects must be physically contiguous.
> >  */
> 
> in v5, Thanks.
> 
> 

Can you please also add that this flag should not be passed by the application?


* Re: [PATCH v4 6/7] mempool: introduce block size align flag
  2017-08-15  6:07       ` [PATCH v4 6/7] mempool: introduce block size align flag Santosh Shukla
@ 2017-09-04 16:20         ` Olivier MATZ
  2017-09-04 17:45           ` santosh
  0 siblings, 1 reply; 116+ messages in thread
From: Olivier MATZ @ 2017-09-04 16:20 UTC (permalink / raw)
  To: Santosh Shukla; +Cc: dev, thomas, jerin.jacob, hemant.agrawal

On Tue, Aug 15, 2017 at 11:37:42AM +0530, Santosh Shukla wrote:
> Some mempool hw like octeontx/fpa block, demands block size
> (/total_elem_sz) aligned object start address.
> 
> Introducing an MEMPOOL_F_POOL_BLK_SZ_ALIGNED flag.
> If this flag is set:
> - Align object start address to a multiple of total_elt_sz.

Please specify if it's virtual or physical address.

What do you think about MEMPOOL_F_BLK_ALIGNED_OBJECTS instead?

I don't really like BLK because the word "block" is not used anywhere
else in the mempool code. But I cannot find any good replacement for
it. If you have another idea, please suggest.

> - Allocate one additional object. Additional object is needed to make
>   sure that requested 'n' object gets correctly populated. Example:
> 
> - Let's say that we get 'x' size of memory chunk from memzone.
> - And application has requested 'n' object from mempool.
> - Ideally, we start using objects at start address 0 to...(x-block_sz)
>   for n obj.
> - Not necessarily first object address i.e. 0 is aligned to block_sz.
> - So we derive 'offset' value for block_sz alignment purpose i.e..'off'.
> - That 'off' makes sure that start address of object is blk_sz
>   aligned.
> - Calculating 'off' may end up sacrificing first block_sz area of
>   memzone area x. So total number of the object which can fit in the
>   pool area is n-1, Which is incorrect behavior.
> 
> Therefore we request one additional object (/block_sz area) from memzone
> when F_BLK_SZ_ALIGNED flag is set.
> 
> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> ---
>  lib/librte_mempool/rte_mempool.c | 16 +++++++++++++---
>  lib/librte_mempool/rte_mempool.h |  1 +
>  2 files changed, 14 insertions(+), 3 deletions(-)
> 
> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> index 19e5e6ddf..7610f0d1f 100644
> --- a/lib/librte_mempool/rte_mempool.c
> +++ b/lib/librte_mempool/rte_mempool.c
> @@ -239,10 +239,14 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
>   */
>  size_t
>  rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
> -		      __rte_unused const struct rte_mempool *mp)
> +		      const struct rte_mempool *mp)
>  {
>  	size_t obj_per_page, pg_num, pg_sz;
>  
> +	if (mp && mp->flags & MEMPOOL_F_POOL_BLK_SZ_ALIGNED)
> +		/* alignment need one additional object */
> +		elt_num += 1;
> +
>  	if (total_elt_sz == 0)
>  		return 0;

I'm wondering if it's correct if the mempool area is not contiguous.

For instance:
 page size = 4096
 object size = 1900
 elt_num = 10

With your calculation, you will request (11+2-1)/2 = 6 pages.
But actually you may need 10 pages (max), since the number of objects per
page matching the alignment constraint is 1, not 2.


* Re: [PATCH v4 4/7] mempool: get the mempool capability
  2017-09-04 15:56             ` Olivier MATZ
@ 2017-09-04 16:29               ` santosh
  0 siblings, 0 replies; 116+ messages in thread
From: santosh @ 2017-09-04 16:29 UTC (permalink / raw)
  To: Olivier MATZ; +Cc: dev, thomas, jerin.jacob, hemant.agrawal


On Monday 04 September 2017 09:26 PM, Olivier MATZ wrote:
> On Mon, Sep 04, 2017 at 08:14:39PM +0530, santosh wrote:
>> Hi Olivier,
>>
>>
>> On Monday 04 September 2017 08:02 PM, Olivier MATZ wrote:
>>> On Tue, Aug 15, 2017 at 11:37:40AM +0530, Santosh Shukla wrote:
>>>> Allow mempool to advertise its capability.
>>>> A handler been introduced called rte_mempool_ops_get_capabilities.
>>>> - Upon ->get_capabilities call, mempool driver will advertise
>>>> capability by updating to 'mp->flags'.
>>>>
>>>> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
>>>> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
>>>> ---
>>>>  lib/librte_mempool/rte_mempool.c           |  5 +++++
>>>>  lib/librte_mempool/rte_mempool.h           | 20 ++++++++++++++++++++
>>>>  lib/librte_mempool/rte_mempool_ops.c       | 14 ++++++++++++++
>>>>  lib/librte_mempool/rte_mempool_version.map |  7 +++++++
>>>>  4 files changed, 46 insertions(+)
>>>>
>>>> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
>>>> index f95c01c00..d518c53de 100644
>>>> --- a/lib/librte_mempool/rte_mempool.c
>>>> +++ b/lib/librte_mempool/rte_mempool.c
>>>> @@ -529,6 +529,11 @@ rte_mempool_populate_default(struct rte_mempool *mp)
>>>>  	if (mp->nb_mem_chunks != 0)
>>>>  		return -EEXIST;
>>>>  
>>>> +	/* Get mempool capability */
>>>> +	ret = rte_mempool_ops_get_capabilities(mp);
>>>> +	if (ret)
>>>> +		RTE_LOG(DEBUG, MEMPOOL, "get_capability not supported for %s\n", mp->name);
>>>> +
>>> there is probably a checkpatch error here (80 cols)
>> for debug, line over 80 char warning acceptable, right?
>> anyways, I will reduce verbose to less than 80 in v5.
> What do you mean by "for debug"?
>
> All lines should be shorter than 80 cols, except if that is not
> possible without spliting a string or making the code hard to
> read or maintain.
>
>>>> +/**
>>>> + * @internal wrapper for mempool_ops get_capabilities callback.
>>>> + *
>>>> + * @param mp
>>>> + *   Pointer to the memory pool.
>>>> + * @return
>>>> + *   - 0: Success; Capability updated to mp->flags
>>>> + *   - <0: Error; code of capability function.
>>>> + */
>>>> +int
>>>> +rte_mempool_ops_get_capabilities(struct rte_mempool *mp);
>>>> +
>>> What does "Capability updated to mp->flags" mean?
>> it says that external mempool driver has updated his pool capability in mp->flags.
>> I'll reword in v5.
> Please, can you explain what does "update" mean?
> Is it masked? Or-ed?

Or-ed.

>>> Why not having instead:
>>>  int rte_mempool_ops_get_capabilities(struct rte_mempool *mp,
>>>      unsigned int *flags);
>>>
>>> ?
>> No strong opinion, But Since we already passing mempool as param why not update
>> flag info into mp->flag.
> From an API perspective, we expect that a function called
> "mempool_ops_get_capabilities" returns something.

Current API return info:
0 : success, meaning the driver supports the capability and advertised it by
OR-ing into mp->flags (in v5, the mempool driver will update the second flags
param instead).
<0 : error.

Is this return info fine with you for v5? Please confirm. Thanks.

>> However I see your, I guess you want explicitly highlight flag as capability update {action}
>> in second param, in that case how about keeping first mempool param 'const' like below:
>>
>> int rte_mempool_ops_get_capabilities(const struct rte_mempool *mp,
>>      unsigned int *flags);
>>
>> are you ok with const change in above API.
> Yes, adding the const makes sense here.

Queued for v6, thanks.


* Re: [PATCH v4 6/7] mempool: introduce block size align flag
  2017-09-04 16:20         ` Olivier MATZ
@ 2017-09-04 17:45           ` santosh
  2017-09-07  7:27             ` Olivier MATZ
  0 siblings, 1 reply; 116+ messages in thread
From: santosh @ 2017-09-04 17:45 UTC (permalink / raw)
  To: Olivier MATZ; +Cc: dev, thomas, jerin.jacob, hemant.agrawal


On Monday 04 September 2017 09:50 PM, Olivier MATZ wrote:
> On Tue, Aug 15, 2017 at 11:37:42AM +0530, Santosh Shukla wrote:
>> Some mempool hw like octeontx/fpa block, demands block size
>> (/total_elem_sz) aligned object start address.
>>
>> Introducing an MEMPOOL_F_POOL_BLK_SZ_ALIGNED flag.
>> If this flag is set:
>> - Align object start address to a multiple of total_elt_sz.
> Please specify if it's virtual or physical address.

Virtual address. Yes, I will mention that in v5. Thanks.

> What do you think about MEMPOOL_F_BLK_ALIGNED_OBJECTS instead?
>
> I don't really like BLK because the word "block" is not used anywhere
> else in the mempool code. But I cannot find any good replacement for
> it. If you have another idea, please suggest.
>
Ok with renaming to MEMPOOL_F_BLK_ALIGNED_OBJECTS

>> - Allocate one additional object. Additional object is needed to make
>>   sure that requested 'n' object gets correctly populated. Example:
>>
>> - Let's say that we get 'x' size of memory chunk from memzone.
>> - And application has requested 'n' object from mempool.
>> - Ideally, we start using objects at start address 0 to...(x-block_sz)
>>   for n obj.
>> - Not necessarily first object address i.e. 0 is aligned to block_sz.
>> - So we derive 'offset' value for block_sz alignment purpose i.e..'off'.
>> - That 'off' makes sure that start address of object is blk_sz
>>   aligned.
>> - Calculating 'off' may end up sacrificing first block_sz area of
>>   memzone area x. So total number of the object which can fit in the
>>   pool area is n-1, Which is incorrect behavior.
>>
>> Therefore we request one additional object (/block_sz area) from memzone
>> when F_BLK_SZ_ALIGNED flag is set.
>>
>> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
>> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
>> ---
>>  lib/librte_mempool/rte_mempool.c | 16 +++++++++++++---
>>  lib/librte_mempool/rte_mempool.h |  1 +
>>  2 files changed, 14 insertions(+), 3 deletions(-)
>>
>> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
>> index 19e5e6ddf..7610f0d1f 100644
>> --- a/lib/librte_mempool/rte_mempool.c
>> +++ b/lib/librte_mempool/rte_mempool.c
>> @@ -239,10 +239,14 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
>>   */
>>  size_t
>>  rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
>> -		      __rte_unused const struct rte_mempool *mp)
>> +		      const struct rte_mempool *mp)
>>  {
>>  	size_t obj_per_page, pg_num, pg_sz;
>>  
>> +	if (mp && mp->flags & MEMPOOL_F_POOL_BLK_SZ_ALIGNED)
>> +		/* alignment need one additional object */
>> +		elt_num += 1;
>> +
>>  	if (total_elt_sz == 0)
>>  		return 0;
> I'm wondering if it's correct if the mempool area is not contiguous.
>
> For instance:
>  page size = 4096
>  object size = 1900
>  elt_num = 10
>
> With your calculation, you will request (11+2-1)/2 = 6 pages.
> But actually you may need 10 pages (max), since the number of object per
> page matching the alignement constraint is 1, not 2.
>
In our case, the PMD sets the MEMPOOL_F_CAPA_PHYS_CONTIG flag to detect contiguity,
so pool creation would fail there, as the HW doesn't support non-contiguous memory.


* [PATCH v5 0/8] Infrastructure to support octeontx HW mempool manager
  2017-08-15  6:07     ` [PATCH v4 0/7] Infrastructure to support octeontx HW mempool manager Santosh Shukla
                         ` (6 preceding siblings ...)
  2017-08-15  6:07       ` [PATCH v4 7/7] mempool: update range info to pool Santosh Shukla
@ 2017-09-06 11:28       ` Santosh Shukla
  2017-09-06 11:28         ` [PATCH v5 1/8] mempool: remove unused flags argument Santosh Shukla
                           ` (8 more replies)
  7 siblings, 9 replies; 116+ messages in thread
From: Santosh Shukla @ 2017-09-06 11:28 UTC (permalink / raw)
  To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla

V5:
Includes v4 review change, suggested by Olivier.

v4:
Includes:
- mempool deprecation changes, refer [1].
- patches rebased against v17.11-rc0.

v3:
In order to support the octeontx HW mempool manager, the common mempool layer must
meet the conditions below:
- Object start address should be block size (total elem size) aligned.
- Object must have the physically contiguous address within the pool.

And right now mempool doesn't support both.

The patchset adds infrastructure to support both conditions in a _generic_ way.
The proposed solution won't affect existing mempool drivers or their functionality.

Summary:
Introducing a capability flag. Mempool drivers can now advertise their
capabilities to the common mempool layer (at pool creation time).
Handlers are introduced in order to support the capability flag.

Flags:
* MEMPOOL_F_CAPA_PHYS_CONTIG - If this flag is set, detect whether the objects
have physically contiguous addresses within a hugepage.

* MEMPOOL_F_BLK_ALIGNED_OBJECTS - If this flag is set, make sure that object
addresses are block-size aligned.

API:
Two handlers are introduced:
* rte_mempool_ops_get_capability - advertise the mempool manager capability.
* rte_mempool_ops_update_range - update the start and end address range in the
HW mempool manager.

Change History:
v4 --> v5:
- Replaced mp param with flags param in xmem_size/_usage() api. (Suggested by
  Olivier)
- Renamed flags from MEMPOOL_F_POOL_BLK_SZ_ALIGNED to
  MEMPOOL_F_BLK_ALIGNED_OBJECTS (suggested by Olivier)
- Added a flags param in the get_capabilities() handler (suggested by Olivier)

Refer individual patch for detailed change history.

v3 --> v4:
* [01 - 02 - 03/07] mempool deprecation notice changes.
* [04 - 05 - 06 - 07/07] are v3 patches.

v2 --> v3:
(Note: v3 work is based on deprecation notice [1], It's for 17.11)
* Changed _version.map from 17.08 to 17.11.
* build fixes reported by stv_sys.
* Patchset rebased on upstream commit: da94a999.


v1 --> v2 :
* [01/06] Per deprecation notice [1], Changed rte_mempool 'flag'
  data type from int to unsigned int and removed flag param
  from _xmem_size/usage api.
* [02/06] Incorporated review feedback from v1 [2] (Suggested by Olivier)
* [03/06] Renamed flag to MEMPOOL_F_CAPA_PHYS_CONTIG
  and comment reworded. (Suggested by Olivier per v1 [3])
* [04/06] added new mempool arg in xmem_size/usage. (Suggested by Olivier)
* [05/06] patch description changed.
        - Removed else-if bracket mix
        - removed sanity check for alignment
        - removed extra var delta
        - Removed __rte_unused from xmem_usage/size and added _BLK_SZ_ALIGN check.
        (Suggeted by Olivier per v1[4])
* [06/06] Added RTE_FUNC_PTR_OR_RET in rte_mempool_ops_update_ops.

Checkpatch status:
CLEAN.

Thanks.

[1] deprecation notice v2: http://dpdk.org/dev/patchwork/patch/27079/
[2] v1: http://dpdk.org/dev/patchwork/patch/25603/
[3] v1: http://dpdk.org/dev/patchwork/patch/25604/
[4] v1: http://dpdk.org/dev/patchwork/patch/25605/

Santosh Shukla (8):
  mempool: remove unused flags argument
  mempool: change flags from int to unsigned int
  mempool: add flags arg in xmem size and usage
  doc: remove mempool notice
  mempool: get the mempool capability
  mempool: detect physical contiguous object in pool
  mempool: introduce block size align flag
  mempool: update range info to pool

 doc/guides/rel_notes/deprecation.rst       |  9 ---
 doc/guides/rel_notes/release_17_11.rst     |  7 +++
 drivers/net/xenvirt/rte_mempool_gntalloc.c |  7 ++-
 lib/librte_mempool/rte_mempool.c           | 47 +++++++++++---
 lib/librte_mempool/rte_mempool.h           | 99 ++++++++++++++++++++++--------
 lib/librte_mempool/rte_mempool_ops.c       | 28 +++++++++
 lib/librte_mempool/rte_mempool_version.map |  8 +++
 test/test/test_mempool.c                   | 25 ++++----
 test/test/test_mempool_perf.c              |  4 +-
 9 files changed, 176 insertions(+), 58 deletions(-)

-- 
2.11.0


* [PATCH v5 1/8] mempool: remove unused flags argument
  2017-09-06 11:28       ` [PATCH v5 0/8] Infrastructure to support octeontx HW mempool manager Santosh Shukla
@ 2017-09-06 11:28         ` Santosh Shukla
  2017-09-07  7:41           ` Olivier MATZ
  2017-09-06 11:28         ` [PATCH v5 2/8] mempool: change flags from int to unsigned int Santosh Shukla
                           ` (7 subsequent siblings)
  8 siblings, 1 reply; 116+ messages in thread
From: Santosh Shukla @ 2017-09-06 11:28 UTC (permalink / raw)
  To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla

* Remove redundant 'flags' API description from
  - __mempool_generic_put
  - __mempool_generic_get
  - rte_mempool_generic_put
  - rte_mempool_generic_get

* Remove unused 'flags' argument from
  - rte_mempool_generic_put
  - rte_mempool_generic_get

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
v4 --> v5:
- Split v4's [01/07] patch and created a specific patch
  for unused flag removal.

 lib/librte_mempool/rte_mempool.h | 31 +++++++++----------------------
 test/test/test_mempool.c         | 18 +++++++++---------
 test/test/test_mempool_perf.c    |  4 ++--
 3 files changed, 20 insertions(+), 33 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 76b5b3b15..ec3884473 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -1034,13 +1034,10 @@ rte_mempool_default_cache(struct rte_mempool *mp, unsigned lcore_id)
  *   positive.
  * @param cache
  *   A pointer to a mempool cache structure. May be NULL if not needed.
- * @param flags
- *   The flags used for the mempool creation.
- *   Single-producer (MEMPOOL_F_SP_PUT flag) or multi-producers.
  */
 static __rte_always_inline void
 __mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
-		      unsigned n, struct rte_mempool_cache *cache)
+		      unsigned int n, struct rte_mempool_cache *cache)
 {
 	void **cache_objs;
 
@@ -1096,14 +1093,10 @@ __mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
  *   The number of objects to add in the mempool from the obj_table.
  * @param cache
  *   A pointer to a mempool cache structure. May be NULL if not needed.
- * @param flags
- *   The flags used for the mempool creation.
- *   Single-producer (MEMPOOL_F_SP_PUT flag) or multi-producers.
  */
 static __rte_always_inline void
 rte_mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
-			unsigned n, struct rte_mempool_cache *cache,
-			__rte_unused int flags)
+			unsigned int n, struct rte_mempool_cache *cache)
 {
 	__mempool_check_cookies(mp, obj_table, n, 0);
 	__mempool_generic_put(mp, obj_table, n, cache);
@@ -1125,11 +1118,11 @@ rte_mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
  */
 static __rte_always_inline void
 rte_mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
-		     unsigned n)
+		     unsigned int n)
 {
 	struct rte_mempool_cache *cache;
 	cache = rte_mempool_default_cache(mp, rte_lcore_id());
-	rte_mempool_generic_put(mp, obj_table, n, cache, mp->flags);
+	rte_mempool_generic_put(mp, obj_table, n, cache);
 }
 
 /**
@@ -1160,16 +1153,13 @@ rte_mempool_put(struct rte_mempool *mp, void *obj)
  *   The number of objects to get, must be strictly positive.
  * @param cache
  *   A pointer to a mempool cache structure. May be NULL if not needed.
- * @param flags
- *   The flags used for the mempool creation.
- *   Single-consumer (MEMPOOL_F_SC_GET flag) or multi-consumers.
  * @return
  *   - >=0: Success; number of objects supplied.
  *   - <0: Error; code of ring dequeue function.
  */
 static __rte_always_inline int
 __mempool_generic_get(struct rte_mempool *mp, void **obj_table,
-		      unsigned n, struct rte_mempool_cache *cache)
+		      unsigned int n, struct rte_mempool_cache *cache)
 {
 	int ret;
 	uint32_t index, len;
@@ -1241,16 +1231,13 @@ __mempool_generic_get(struct rte_mempool *mp, void **obj_table,
  *   The number of objects to get from mempool to obj_table.
  * @param cache
  *   A pointer to a mempool cache structure. May be NULL if not needed.
- * @param flags
- *   The flags used for the mempool creation.
- *   Single-consumer (MEMPOOL_F_SC_GET flag) or multi-consumers.
  * @return
  *   - 0: Success; objects taken.
  *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.
  */
 static __rte_always_inline int
-rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table, unsigned n,
-			struct rte_mempool_cache *cache, __rte_unused int flags)
+rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table,
+			unsigned int n, struct rte_mempool_cache *cache)
 {
 	int ret;
 	ret = __mempool_generic_get(mp, obj_table, n, cache);
@@ -1282,11 +1269,11 @@ rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table, unsigned n,
  *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.
  */
 static __rte_always_inline int
-rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
+rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned int n)
 {
 	struct rte_mempool_cache *cache;
 	cache = rte_mempool_default_cache(mp, rte_lcore_id());
-	return rte_mempool_generic_get(mp, obj_table, n, cache, mp->flags);
+	return rte_mempool_generic_get(mp, obj_table, n, cache);
 }
 
 /**
diff --git a/test/test/test_mempool.c b/test/test/test_mempool.c
index 0a4423954..47dc3ac5f 100644
--- a/test/test/test_mempool.c
+++ b/test/test/test_mempool.c
@@ -129,7 +129,7 @@ test_mempool_basic(struct rte_mempool *mp, int use_external_cache)
 	rte_mempool_dump(stdout, mp);
 
 	printf("get an object\n");
-	if (rte_mempool_generic_get(mp, &obj, 1, cache, 0) < 0)
+	if (rte_mempool_generic_get(mp, &obj, 1, cache) < 0)
 		GOTO_ERR(ret, out);
 	rte_mempool_dump(stdout, mp);
 
@@ -152,21 +152,21 @@ test_mempool_basic(struct rte_mempool *mp, int use_external_cache)
 #endif
 
 	printf("put the object back\n");
-	rte_mempool_generic_put(mp, &obj, 1, cache, 0);
+	rte_mempool_generic_put(mp, &obj, 1, cache);
 	rte_mempool_dump(stdout, mp);
 
 	printf("get 2 objects\n");
-	if (rte_mempool_generic_get(mp, &obj, 1, cache, 0) < 0)
+	if (rte_mempool_generic_get(mp, &obj, 1, cache) < 0)
 		GOTO_ERR(ret, out);
-	if (rte_mempool_generic_get(mp, &obj2, 1, cache, 0) < 0) {
-		rte_mempool_generic_put(mp, &obj, 1, cache, 0);
+	if (rte_mempool_generic_get(mp, &obj2, 1, cache) < 0) {
+		rte_mempool_generic_put(mp, &obj, 1, cache);
 		GOTO_ERR(ret, out);
 	}
 	rte_mempool_dump(stdout, mp);
 
 	printf("put the objects back\n");
-	rte_mempool_generic_put(mp, &obj, 1, cache, 0);
-	rte_mempool_generic_put(mp, &obj2, 1, cache, 0);
+	rte_mempool_generic_put(mp, &obj, 1, cache);
+	rte_mempool_generic_put(mp, &obj2, 1, cache);
 	rte_mempool_dump(stdout, mp);
 
 	/*
@@ -178,7 +178,7 @@ test_mempool_basic(struct rte_mempool *mp, int use_external_cache)
 		GOTO_ERR(ret, out);
 
 	for (i = 0; i < MEMPOOL_SIZE; i++) {
-		if (rte_mempool_generic_get(mp, &objtable[i], 1, cache, 0) < 0)
+		if (rte_mempool_generic_get(mp, &objtable[i], 1, cache) < 0)
 			break;
 	}
 
@@ -200,7 +200,7 @@ test_mempool_basic(struct rte_mempool *mp, int use_external_cache)
 				ret = -1;
 		}
 
-		rte_mempool_generic_put(mp, &objtable[i], 1, cache, 0);
+		rte_mempool_generic_put(mp, &objtable[i], 1, cache);
 	}
 
 	free(objtable);
diff --git a/test/test/test_mempool_perf.c b/test/test/test_mempool_perf.c
index 07b28c066..3b8f7de7c 100644
--- a/test/test/test_mempool_perf.c
+++ b/test/test/test_mempool_perf.c
@@ -186,7 +186,7 @@ per_lcore_mempool_test(void *arg)
 				ret = rte_mempool_generic_get(mp,
 							      &obj_table[idx],
 							      n_get_bulk,
-							      cache, 0);
+							      cache);
 				if (unlikely(ret < 0)) {
 					rte_mempool_dump(stdout, mp);
 					/* in this case, objects are lost... */
@@ -200,7 +200,7 @@ per_lcore_mempool_test(void *arg)
 			while (idx < n_keep) {
 				rte_mempool_generic_put(mp, &obj_table[idx],
 							n_put_bulk,
-							cache, 0);
+							cache);
 				idx += n_put_bulk;
 			}
 		}
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 116+ messages in thread

* [PATCH v5 2/8] mempool: change flags from int to unsigned int
  2017-09-06 11:28       ` [PATCH v5 0/8] Infrastructure to support octeontx HW mempool manager Santosh Shukla
  2017-09-06 11:28         ` [PATCH v5 1/8] mempool: remove unused flags argument Santosh Shukla
@ 2017-09-06 11:28         ` Santosh Shukla
  2017-09-07  7:43           ` Olivier MATZ
  2017-09-06 11:28         ` [PATCH v5 3/8] mempool: add flags arg in xmem size and usage Santosh Shukla
                           ` (6 subsequent siblings)
  8 siblings, 1 reply; 116+ messages in thread
From: Santosh Shukla @ 2017-09-06 11:28 UTC (permalink / raw)
  To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla

mp->flags is an int, but the mempool API writes unsigned int
values into 'flags', so fix the 'flags' data type.

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
v4 --> v5:
- Split v4's [01/07] into this second patch.

 lib/librte_mempool/rte_mempool.c | 4 ++--
 lib/librte_mempool/rte_mempool.h | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 6fc3c9c7c..237665c65 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -515,7 +515,7 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
 int
 rte_mempool_populate_default(struct rte_mempool *mp)
 {
-	int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
+	unsigned int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
 	char mz_name[RTE_MEMZONE_NAMESIZE];
 	const struct rte_memzone *mz;
 	size_t size, total_elt_sz, align, pg_sz, pg_shift;
@@ -742,7 +742,7 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
 	struct rte_tailq_entry *te = NULL;
 	const struct rte_memzone *mz = NULL;
 	size_t mempool_size;
-	int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
+	unsigned int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
 	struct rte_mempool_objsz objsz;
 	unsigned lcore_id;
 	int ret;
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index ec3884473..bf65d62fe 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -226,7 +226,7 @@ struct rte_mempool {
 	};
 	void *pool_config;               /**< optional args for ops alloc. */
 	const struct rte_memzone *mz;    /**< Memzone where pool is alloc'd. */
-	int flags;                       /**< Flags of the mempool. */
+	unsigned int flags;              /**< Flags of the mempool. */
 	int socket_id;                   /**< Socket id passed at create. */
 	uint32_t size;                   /**< Max size of the mempool. */
 	uint32_t cache_size;
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 116+ messages in thread

* [PATCH v5 3/8] mempool: add flags arg in xmem size and usage
  2017-09-06 11:28       ` [PATCH v5 0/8] Infrastructure to support octeontx HW mempool manager Santosh Shukla
  2017-09-06 11:28         ` [PATCH v5 1/8] mempool: remove unused flags argument Santosh Shukla
  2017-09-06 11:28         ` [PATCH v5 2/8] mempool: change flags from int to unsigned int Santosh Shukla
@ 2017-09-06 11:28         ` Santosh Shukla
  2017-09-07  7:46           ` Olivier MATZ
  2017-09-06 11:28         ` [PATCH v5 4/8] doc: remove mempool notice Santosh Shukla
                           ` (5 subsequent siblings)
  8 siblings, 1 reply; 116+ messages in thread
From: Santosh Shukla @ 2017-09-06 11:28 UTC (permalink / raw)
  To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla

xmem_size and xmem_usage need to know the status of the mempool flags,
so add a 'flags' argument to the _xmem_size()/_xmem_usage() API.

A following patch will make use of it.

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
v4 --> v5:
- Removed 'mp' param and replaced with 'flags' param for
  xmem_size/_usage api. (suggested by Olivier)
Refer [1].
[1] http://dpdk.org/dev/patchwork/patch/27596/

 drivers/net/xenvirt/rte_mempool_gntalloc.c |  7 ++++---
 lib/librte_mempool/rte_mempool.c           | 11 +++++++----
 lib/librte_mempool/rte_mempool.h           |  8 ++++++--
 test/test/test_mempool.c                   |  7 ++++---
 4 files changed, 21 insertions(+), 12 deletions(-)

diff --git a/drivers/net/xenvirt/rte_mempool_gntalloc.c b/drivers/net/xenvirt/rte_mempool_gntalloc.c
index 73e82f808..7f7aecdc1 100644
--- a/drivers/net/xenvirt/rte_mempool_gntalloc.c
+++ b/drivers/net/xenvirt/rte_mempool_gntalloc.c
@@ -79,7 +79,7 @@ _create_mempool(const char *name, unsigned elt_num, unsigned elt_size,
 		   unsigned cache_size, unsigned private_data_size,
 		   rte_mempool_ctor_t *mp_init, void *mp_init_arg,
 		   rte_mempool_obj_cb_t *obj_init, void *obj_init_arg,
-		   int socket_id, unsigned flags)
+		   int socket_id, unsigned int flags)
 {
 	struct _mempool_gntalloc_info mgi;
 	struct rte_mempool *mp = NULL;
@@ -114,7 +114,7 @@ _create_mempool(const char *name, unsigned elt_num, unsigned elt_size,
 	pg_shift = rte_bsf32(pg_sz);
 
 	rte_mempool_calc_obj_size(elt_size, flags, &objsz);
-	sz = rte_mempool_xmem_size(elt_num, objsz.total_size, pg_shift);
+	sz = rte_mempool_xmem_size(elt_num, objsz.total_size, pg_shift, flags);
 	pg_num = sz >> pg_shift;
 
 	pa_arr = calloc(pg_num, sizeof(pa_arr[0]));
@@ -162,7 +162,8 @@ _create_mempool(const char *name, unsigned elt_num, unsigned elt_size,
 	 * Check that allocated size is big enough to hold elt_num
 	 * objects and a calcualte how many bytes are actually required.
 	 */
-	usz = rte_mempool_xmem_usage(va, elt_num, objsz.total_size, pa_arr, pg_num, pg_shift);
+	usz = rte_mempool_xmem_usage(va, elt_num, objsz.total_size, pa_arr,
+				     pg_num, pg_shift, flags);
 	if (usz < 0) {
 		mp = NULL;
 		i = pg_num;
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 237665c65..005240042 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -238,7 +238,8 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
  * Calculate maximum amount of memory required to store given number of objects.
  */
 size_t
-rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift)
+rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
+		      __rte_unused unsigned int flags)
 {
 	size_t obj_per_page, pg_num, pg_sz;
 
@@ -264,7 +265,7 @@ rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift)
 ssize_t
 rte_mempool_xmem_usage(__rte_unused void *vaddr, uint32_t elt_num,
 	size_t total_elt_sz, const phys_addr_t paddr[], uint32_t pg_num,
-	uint32_t pg_shift)
+	uint32_t pg_shift, __rte_unused unsigned int flags)
 {
 	uint32_t elt_cnt = 0;
 	phys_addr_t start, end;
@@ -543,7 +544,8 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 
 	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
 	for (mz_id = 0, n = mp->size; n > 0; mz_id++, n -= ret) {
-		size = rte_mempool_xmem_size(n, total_elt_sz, pg_shift);
+		size = rte_mempool_xmem_size(n, total_elt_sz, pg_shift,
+						mp->flags);
 
 		ret = snprintf(mz_name, sizeof(mz_name),
 			RTE_MEMPOOL_MZ_FORMAT "_%d", mp->name, mz_id);
@@ -600,7 +602,8 @@ get_anon_size(const struct rte_mempool *mp)
 	pg_sz = getpagesize();
 	pg_shift = rte_bsf32(pg_sz);
 	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
-	size = rte_mempool_xmem_size(mp->size, total_elt_sz, pg_shift);
+	size = rte_mempool_xmem_size(mp->size, total_elt_sz, pg_shift,
+					mp->flags);
 
 	return size;
 }
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index bf65d62fe..202854f30 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -1476,11 +1476,13 @@ uint32_t rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
  *   by rte_mempool_calc_obj_size().
  * @param pg_shift
  *   LOG2 of the physical pages size. If set to 0, ignore page boundaries.
+ * @param flags
+ *  The mempool flag.
  * @return
  *   Required memory size aligned at page boundary.
  */
 size_t rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz,
-	uint32_t pg_shift);
+	uint32_t pg_shift, unsigned int flags);
 
 /**
  * Get the size of memory required to store mempool elements.
@@ -1503,6 +1505,8 @@ size_t rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz,
  *   Number of elements in the paddr array.
  * @param pg_shift
  *   LOG2 of the physical pages size.
+ * @param flags
+ *  The mempool flag.
  * @return
  *   On success, the number of bytes needed to store given number of
  *   objects, aligned to the given page size. If the provided memory
@@ -1511,7 +1515,7 @@ size_t rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz,
  */
 ssize_t rte_mempool_xmem_usage(void *vaddr, uint32_t elt_num,
 	size_t total_elt_sz, const phys_addr_t paddr[], uint32_t pg_num,
-	uint32_t pg_shift);
+	uint32_t pg_shift, unsigned int flags);
 
 /**
  * Walk list of all memory pools
diff --git a/test/test/test_mempool.c b/test/test/test_mempool.c
index 47dc3ac5f..a225e1209 100644
--- a/test/test/test_mempool.c
+++ b/test/test/test_mempool.c
@@ -474,7 +474,7 @@ test_mempool_same_name_twice_creation(void)
 }
 
 /*
- * BAsic test for mempool_xmem functions.
+ * Basic test for mempool_xmem functions.
  */
 static int
 test_mempool_xmem_misc(void)
@@ -485,10 +485,11 @@ test_mempool_xmem_misc(void)
 
 	elt_num = MAX_KEEP;
 	total_size = rte_mempool_calc_obj_size(MEMPOOL_ELT_SIZE, 0, NULL);
-	sz = rte_mempool_xmem_size(elt_num, total_size, MEMPOOL_PG_SHIFT_MAX);
+	sz = rte_mempool_xmem_size(elt_num, total_size, MEMPOOL_PG_SHIFT_MAX,
+					0);
 
 	usz = rte_mempool_xmem_usage(NULL, elt_num, total_size, 0, 1,
-		MEMPOOL_PG_SHIFT_MAX);
+		MEMPOOL_PG_SHIFT_MAX, 0);
 
 	if (sz != (size_t)usz)  {
 		printf("failure @ %s: rte_mempool_xmem_usage(%u, %u) "
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 116+ messages in thread

* [PATCH v5 4/8] doc: remove mempool notice
  2017-09-06 11:28       ` [PATCH v5 0/8] Infrastructure to support octeontx HW mempool manager Santosh Shukla
                           ` (2 preceding siblings ...)
  2017-09-06 11:28         ` [PATCH v5 3/8] mempool: add flags arg in xmem size and usage Santosh Shukla
@ 2017-09-06 11:28         ` Santosh Shukla
  2017-09-07  7:47           ` Olivier MATZ
  2017-09-06 11:28         ` [PATCH v5 5/8] mempool: get the mempool capability Santosh Shukla
                           ` (4 subsequent siblings)
  8 siblings, 1 reply; 116+ messages in thread
From: Santosh Shukla @ 2017-09-06 11:28 UTC (permalink / raw)
  To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla

Removed the mempool deprecation notice and
updated the change info in release_17_11.rst.

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
v4 --> v5:
- Since the mp param was replaced by a flags param in patch [03/08],
  incorporated the same change in the doc.

 doc/guides/rel_notes/deprecation.rst   | 9 ---------
 doc/guides/rel_notes/release_17_11.rst | 7 +++++++
 2 files changed, 7 insertions(+), 9 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 3362f3350..0e4cb1f95 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -44,15 +44,6 @@ Deprecation Notices
   PKT_RX_QINQ_STRIPPED, that are better described. The old flags and
   their behavior will be kept until 17.08 and will be removed in 17.11.
 
-* mempool: The following will be modified in 17.11:
-
-  - ``rte_mempool_xmem_size`` and ``rte_mempool_xmem_usage`` need to know
-    the mempool flag status so adding new param rte_mempool in those API.
-  - Removing __rte_unused int flag param from ``rte_mempool_generic_put``
-    and ``rte_mempool_generic_get`` API.
-  - ``rte_mempool`` flags data type will changed from int to
-    unsigned int.
-
 * ethdev: Tx offloads will no longer be enabled by default in 17.11.
   Instead, the ``rte_eth_txmode`` structure will be extended with
   bit field to enable each Tx offload.
diff --git a/doc/guides/rel_notes/release_17_11.rst b/doc/guides/rel_notes/release_17_11.rst
index 170f4f916..6b17af7bc 100644
--- a/doc/guides/rel_notes/release_17_11.rst
+++ b/doc/guides/rel_notes/release_17_11.rst
@@ -110,6 +110,13 @@ API Changes
    Also, make sure to start the actual text at the margin.
    =========================================================
 
+* **The following changes made in mempool library**
+
+  * Moved ``flags`` datatype from int to unsigned int for ``rte_mempool``.
+  * Removed ``__rte_unused int flag`` param from ``rte_mempool_generic_put``
+    and ``rte_mempool_generic_get`` API.
+  * Added ``flags`` param in ``rte_mempool_xmem_size`` and
+    ``rte_mempool_xmem_usage``.
 
 ABI Changes
 -----------
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 116+ messages in thread

* [PATCH v5 5/8] mempool: get the mempool capability
  2017-09-06 11:28       ` [PATCH v5 0/8] Infrastructure to support octeontx HW mempool manager Santosh Shukla
                           ` (3 preceding siblings ...)
  2017-09-06 11:28         ` [PATCH v5 4/8] doc: remove mempool notice Santosh Shukla
@ 2017-09-06 11:28         ` Santosh Shukla
  2017-09-07  7:59           ` Olivier MATZ
  2017-09-06 11:28         ` [PATCH v5 6/8] mempool: detect physical contiguous object in pool Santosh Shukla
                           ` (3 subsequent siblings)
  8 siblings, 1 reply; 116+ messages in thread
From: Santosh Shukla @ 2017-09-06 11:28 UTC (permalink / raw)
  To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla

Allow a mempool driver to advertise its pool capabilities.
For that purpose, an API (rte_mempool_ops_get_capabilities)
and a ->get_capabilities() handler have been introduced.
Upon a ->get_capabilities() call, the mempool driver advertises
its capabilities by OR-ing them into the mempool 'flags'.

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
v4 --> v5:
- Added flags as a second param in the get_capabilities API (suggested by Olivier)
- Removed 80-char warning. (suggested by Olivier)
- Updated the API description; it now explicitly mentions that the update is
  an OR'ed operation by the mempool handler. (suggested by Olivier)
refer [1].
[1] http://dpdk.org/dev/patchwork/patch/27598/

 lib/librte_mempool/rte_mempool.c           |  6 ++++++
 lib/librte_mempool/rte_mempool.h           | 26 ++++++++++++++++++++++++++
 lib/librte_mempool/rte_mempool_ops.c       | 15 +++++++++++++++
 lib/librte_mempool/rte_mempool_version.map |  7 +++++++
 4 files changed, 54 insertions(+)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 005240042..3c4a096b7 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -528,6 +528,12 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 	if (mp->nb_mem_chunks != 0)
 		return -EEXIST;
 
+	/* Get mempool capability */
+	ret = rte_mempool_ops_get_capabilities(mp, &mp->flags);
+	if (ret < 0)
+		RTE_LOG(DEBUG, MEMPOOL, "get_capability not supported for %s\n",
+					mp->name);
+
 	if (rte_xen_dom0_supported()) {
 		pg_sz = RTE_PGSIZE_2M;
 		pg_shift = rte_bsf32(pg_sz);
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 202854f30..4fb538962 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -389,6 +389,12 @@ typedef int (*rte_mempool_dequeue_t)(struct rte_mempool *mp,
  */
 typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
 
+/**
+ * Get the mempool capability.
+ */
+typedef int (*rte_mempool_get_capabilities_t)(const struct rte_mempool *mp,
+		unsigned int *flags);
+
 /** Structure defining mempool operations structure */
 struct rte_mempool_ops {
 	char name[RTE_MEMPOOL_OPS_NAMESIZE]; /**< Name of mempool ops struct. */
@@ -397,6 +403,10 @@ struct rte_mempool_ops {
 	rte_mempool_enqueue_t enqueue;   /**< Enqueue an object. */
 	rte_mempool_dequeue_t dequeue;   /**< Dequeue an object. */
 	rte_mempool_get_count get_count; /**< Get qty of available objs. */
+	/**
+	 * Get the pool capability
+	 */
+	rte_mempool_get_capabilities_t get_capabilities;
 } __rte_cache_aligned;
 
 #define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */
@@ -509,6 +519,22 @@ unsigned
 rte_mempool_ops_get_count(const struct rte_mempool *mp);
 
 /**
+ * @internal wrapper for mempool_ops get_capabilities callback.
+ *
+ * @param mp [in]
+ *   Pointer to the memory pool.
+ * @param flags [out]
+ *   Pointer to the mempool flag.
+ * @return
+ *   - 0: Success; the mempool driver has advertised its pool capability by
+ *   OR-ing into the flags param.
+ *   - <0: Error; code of capability function.
+ */
+int
+rte_mempool_ops_get_capabilities(const struct rte_mempool *mp,
+					unsigned int *flags);
+
+/**
  * @internal wrapper for mempool_ops free callback.
  *
  * @param mp
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
index 5f24de250..9f605ae2d 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -37,6 +37,7 @@
 
 #include <rte_mempool.h>
 #include <rte_errno.h>
+#include <rte_dev.h>
 
 /* indirect jump table to support external memory pools. */
 struct rte_mempool_ops_table rte_mempool_ops_table = {
@@ -85,6 +86,7 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h)
 	ops->enqueue = h->enqueue;
 	ops->dequeue = h->dequeue;
 	ops->get_count = h->get_count;
+	ops->get_capabilities = h->get_capabilities;
 
 	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
 
@@ -123,6 +125,19 @@ rte_mempool_ops_get_count(const struct rte_mempool *mp)
 	return ops->get_count(mp);
 }
 
+/* wrapper to get external mempool capability. */
+int
+rte_mempool_ops_get_capabilities(const struct rte_mempool *mp,
+					unsigned int *flags)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_get_ops(mp->ops_index);
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->get_capabilities, -ENOTSUP);
+	return ops->get_capabilities(mp, flags);
+}
+
 /* sets mempool ops previously registered by rte_mempool_register_ops. */
 int
 rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index f9c079447..3c3471507 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -41,3 +41,10 @@ DPDK_16.07 {
 	rte_mempool_set_ops_byname;
 
 } DPDK_2.0;
+
+DPDK_17.11 {
+	global:
+
+	rte_mempool_ops_get_capabilities;
+
+} DPDK_16.07;
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 116+ messages in thread

* [PATCH v5 6/8] mempool: detect physical contiguous object in pool
  2017-09-06 11:28       ` [PATCH v5 0/8] Infrastructure to support octeontx HW mempool manager Santosh Shukla
                           ` (4 preceding siblings ...)
  2017-09-06 11:28         ` [PATCH v5 5/8] mempool: get the mempool capability Santosh Shukla
@ 2017-09-06 11:28         ` Santosh Shukla
  2017-09-07  8:05           ` Olivier MATZ
  2017-09-06 11:28         ` [PATCH v5 7/8] mempool: introduce block size align flag Santosh Shukla
                           ` (2 subsequent siblings)
  8 siblings, 1 reply; 116+ messages in thread
From: Santosh Shukla @ 2017-09-06 11:28 UTC (permalink / raw)
  To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla

The memory area containing all the objects must be physically
contiguous. Introduce the MEMPOOL_F_CAPA_PHYS_CONTIG flag for such
use cases.

The flag is useful to detect whether the pool area has sufficient space
to fit all objects. If it does not, return -ENOSPC.
This way, we make sure that all objects within the pool are contiguous.

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
v4 --> v5:
- Pulled in the example API description mentioned by Olivier; I find it
  a more correct description, so used it as is in the patch.
Refer [1].
[1] http://dpdk.org/dev/patchwork/patch/27599/

 lib/librte_mempool/rte_mempool.c | 10 ++++++++++
 lib/librte_mempool/rte_mempool.h |  6 ++++++
 2 files changed, 16 insertions(+)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 3c4a096b7..103fbd0ed 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -369,6 +369,16 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
 
 	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
 
+	/* Detect pool area has sufficient space for elements */
+	if (mp->flags & MEMPOOL_F_CAPA_PHYS_CONTIG) {
+		if (len < total_elt_sz * mp->size) {
+			RTE_LOG(ERR, MEMPOOL,
+				"pool area %" PRIx64 " not enough\n",
+				(uint64_t)len);
+			return -ENOSPC;
+		}
+	}
+
 	memhdr = rte_zmalloc("MEMPOOL_MEMHDR", sizeof(*memhdr), 0);
 	if (memhdr == NULL)
 		return -ENOMEM;
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 4fb538962..63688faff 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -265,6 +265,12 @@ struct rte_mempool {
 #define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
 #define MEMPOOL_F_POOL_CREATED   0x0010 /**< Internal: pool is created. */
 #define MEMPOOL_F_NO_PHYS_CONTIG 0x0020 /**< Don't need physically contiguous objs. */
+/**
+ * This capability flag is advertised by a mempool handler, if the whole
+ * memory area containing the objects must be physically contiguous.
+ * Note: This flag should not be passed by application.
+ */
+#define MEMPOOL_F_CAPA_PHYS_CONTIG 0x0040
 
 /**
  * @internal When debug is enabled, store some statistics.
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 116+ messages in thread

* [PATCH v5 7/8] mempool: introduce block size align flag
  2017-09-06 11:28       ` [PATCH v5 0/8] Infrastructure to support octeontx HW mempool manager Santosh Shukla
                           ` (5 preceding siblings ...)
  2017-09-06 11:28         ` [PATCH v5 6/8] mempool: detect physical contiguous object in pool Santosh Shukla
@ 2017-09-06 11:28         ` Santosh Shukla
  2017-09-07  8:13           ` Olivier MATZ
  2017-09-06 11:28         ` [PATCH v5 8/8] mempool: update range info to pool Santosh Shukla
  2017-09-07 15:30         ` [PATCH v6 0/8] Infrastructure to support octeontx HW mempool manager Santosh Shukla
  8 siblings, 1 reply; 116+ messages in thread
From: Santosh Shukla @ 2017-09-06 11:28 UTC (permalink / raw)
  To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla

Some mempool HW, like the octeontx/fpa block, demands block size
(/total_elt_sz) aligned object start addresses.

Introduce the MEMPOOL_F_BLK_ALIGNED_OBJECTS flag.
If this flag is set:
- Align the object start address (vaddr) to a multiple of total_elt_sz.
- Allocate one additional object. The additional object is needed to make
  sure that the requested 'n' objects get correctly populated. Example:

- Let's say we get a memory chunk of size 'x' from the memzone.
- And the application has requested 'n' objects from the mempool.
- Ideally, we would use objects from start address 0 to (x - block_sz)
  for the n objects.
- But the first object address, i.e. 0, is not necessarily aligned to
  block_sz.
- So we derive an offset value 'off' for block size alignment purposes.
- That 'off' makes sure the start address of each object is blk_sz
  aligned.
- Applying 'off' may sacrifice up to the first block_sz bytes of the
  memzone area 'x', so only n-1 objects would fit in the pool area,
  which is incorrect behavior.

Therefore we request one additional object (/block_sz area) from the
memzone when the F_BLK_ALIGNED_OBJECTS flag is set.

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
v4 --> v5:
- Added vaddr in git description of patch (suggested by Olivier)
- Renamed to aligned flag to MEMPOOL_F_BLK_ALIGNED_OBJECTS (suggested by
  Olivier)
Refer [1].
[1] http://dpdk.org/dev/patchwork/patch/27600/

 lib/librte_mempool/rte_mempool.c | 17 ++++++++++++++---
 lib/librte_mempool/rte_mempool.h |  4 ++++
 2 files changed, 18 insertions(+), 3 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 103fbd0ed..38dab1067 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -239,10 +239,14 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
  */
 size_t
 rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
-		      __rte_unused unsigned int flags)
+		      unsigned int flags)
 {
 	size_t obj_per_page, pg_num, pg_sz;
 
+	if (flags & MEMPOOL_F_BLK_ALIGNED_OBJECTS)
+		/* alignment needs one additional object */
+		elt_num += 1;
+
 	if (total_elt_sz == 0)
 		return 0;
 
@@ -265,13 +269,17 @@ rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
 ssize_t
 rte_mempool_xmem_usage(__rte_unused void *vaddr, uint32_t elt_num,
 	size_t total_elt_sz, const phys_addr_t paddr[], uint32_t pg_num,
-	uint32_t pg_shift, __rte_unused unsigned int flags)
+	uint32_t pg_shift, unsigned int flags)
 {
 	uint32_t elt_cnt = 0;
 	phys_addr_t start, end;
 	uint32_t paddr_idx;
 	size_t pg_sz = (size_t)1 << pg_shift;
 
+	if (flags & MEMPOOL_F_BLK_ALIGNED_OBJECTS)
+		/* alignment needs one additional object */
+		elt_num += 1;
+
 	/* if paddr is NULL, assume contiguous memory */
 	if (paddr == NULL) {
 		start = 0;
@@ -390,7 +398,10 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
 	memhdr->free_cb = free_cb;
 	memhdr->opaque = opaque;
 
-	if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
+	if (mp->flags & MEMPOOL_F_BLK_ALIGNED_OBJECTS)
+		/* align object start address to a multiple of total_elt_sz */
+		off = total_elt_sz - ((uintptr_t)vaddr % total_elt_sz);
+	else if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
 		off = RTE_PTR_ALIGN_CEIL(vaddr, 8) - vaddr;
 	else
 		off = RTE_PTR_ALIGN_CEIL(vaddr, RTE_CACHE_LINE_SIZE) - vaddr;
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 63688faff..110ffb601 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -271,6 +271,10 @@ struct rte_mempool {
  * Note: This flag should not be passed by application.
  */
 #define MEMPOOL_F_CAPA_PHYS_CONTIG 0x0040
+/**
+ * Align object start address to total elem size
+ */
+#define MEMPOOL_F_BLK_ALIGNED_OBJECTS 0x0080
 
 /**
  * @internal When debug is enabled, store some statistics.
-- 
2.11.0


* [PATCH v5 8/8] mempool: update range info to pool
  2017-09-06 11:28       ` [PATCH v5 0/8] Infrastructure to support octeontx HW mempool manager Santosh Shukla
                           ` (6 preceding siblings ...)
  2017-09-06 11:28         ` [PATCH v5 7/8] mempool: introduce block size align flag Santosh Shukla
@ 2017-09-06 11:28         ` Santosh Shukla
  2017-09-07  8:30           ` Olivier MATZ
  2017-09-07 15:30         ` [PATCH v6 0/8] Infrastructure to support octeontx HW mempool manager Santosh Shukla
  8 siblings, 1 reply; 116+ messages in thread
From: Santosh Shukla @ 2017-09-06 11:28 UTC (permalink / raw)
  To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla

An HW pool manager, e.g. the Octeontx SoC, demands that software program
the start and end addresses of the pool. Currently, there is no such
handle in the external mempool. Introducing the rte_mempool_update_range
handle, which lets the HW pool manager know which hugepages the common
layer selects:
for each hugepage, update its start/end addresses to the HW pool manager.

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
 lib/librte_mempool/rte_mempool.c           |  3 +++
 lib/librte_mempool/rte_mempool.h           | 22 ++++++++++++++++++++++
 lib/librte_mempool/rte_mempool_ops.c       | 13 +++++++++++++
 lib/librte_mempool/rte_mempool_version.map |  1 +
 4 files changed, 39 insertions(+)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 38dab1067..65f17a7a7 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -363,6 +363,9 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
 	struct rte_mempool_memhdr *memhdr;
 	int ret;
 
+	/* update range info to mempool */
+	rte_mempool_ops_update_range(mp, vaddr, paddr, len);
+
 	/* create the internal ring if not already done */
 	if ((mp->flags & MEMPOOL_F_POOL_CREATED) == 0) {
 		ret = rte_mempool_ops_alloc(mp);
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 110ffb601..dfde31c35 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -405,6 +405,12 @@ typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
 typedef int (*rte_mempool_get_capabilities_t)(const struct rte_mempool *mp,
 		unsigned int *flags);
 
+/**
+ * Update range info to mempool.
+ */
+typedef void (*rte_mempool_update_range_t)(const struct rte_mempool *mp,
+		char *vaddr, phys_addr_t paddr, size_t len);
+
 /** Structure defining mempool operations structure */
 struct rte_mempool_ops {
 	char name[RTE_MEMPOOL_OPS_NAMESIZE]; /**< Name of mempool ops struct. */
@@ -417,6 +423,7 @@ struct rte_mempool_ops {
 	 * Get the pool capability
 	 */
 	rte_mempool_get_capabilities_t get_capabilities;
+	rte_mempool_update_range_t update_range; /**< Update range to mempool */
 } __rte_cache_aligned;
 
 #define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */
@@ -543,6 +550,21 @@ rte_mempool_ops_get_count(const struct rte_mempool *mp);
 int
 rte_mempool_ops_get_capabilities(const struct rte_mempool *mp,
 					unsigned int *flags);
+/**
+ * @internal wrapper for mempool_ops update_range callback.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param vaddr
+ *   Pointer to the buffer virtual address
+ * @param paddr
+ *   Pointer to the buffer physical address
+ * @param len
+ *   Pool size
+ */
+void
+rte_mempool_ops_update_range(const struct rte_mempool *mp,
+				char *vaddr, phys_addr_t paddr, size_t len);
 
 /**
  * @internal wrapper for mempool_ops free callback.
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
index 9f605ae2d..549ade2d1 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -87,6 +87,7 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h)
 	ops->dequeue = h->dequeue;
 	ops->get_count = h->get_count;
 	ops->get_capabilities = h->get_capabilities;
+	ops->update_range = h->update_range;
 
 	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
 
@@ -138,6 +139,18 @@ rte_mempool_ops_get_capabilities(const struct rte_mempool *mp,
 	return ops->get_capabilities(mp, flags);
 }
 
+/* wrapper to update range info to external mempool */
+void
+rte_mempool_ops_update_range(const struct rte_mempool *mp, char *vaddr,
+			     phys_addr_t paddr, size_t len)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_get_ops(mp->ops_index);
+	RTE_FUNC_PTR_OR_RET(ops->update_range);
+	ops->update_range(mp, vaddr, paddr, len);
+}
+
 /* sets mempool ops previously registered by rte_mempool_register_ops. */
 int
 rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index 3c3471507..2663001c3 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -46,5 +46,6 @@ DPDK_17.11 {
 	global:
 
 	rte_mempool_ops_get_capabilities;
+	rte_mempool_ops_update_range;
 
 } DPDK_16.07;
-- 
2.11.0


* Re: [PATCH v4 6/7] mempool: introduce block size align flag
  2017-09-04 17:45           ` santosh
@ 2017-09-07  7:27             ` Olivier MATZ
  2017-09-07  7:37               ` santosh
  0 siblings, 1 reply; 116+ messages in thread
From: Olivier MATZ @ 2017-09-07  7:27 UTC (permalink / raw)
  To: santosh; +Cc: dev, thomas, jerin.jacob, hemant.agrawal

On Mon, Sep 04, 2017 at 11:15:50PM +0530, santosh wrote:
> 
> On Monday 04 September 2017 09:50 PM, Olivier MATZ wrote:
> > On Tue, Aug 15, 2017 at 11:37:42AM +0530, Santosh Shukla wrote:
> >> Some mempool hw like octeontx/fpa block, demands block size
> >> (/total_elem_sz) aligned object start address.
> >>
> >> Introducing an MEMPOOL_F_POOL_BLK_SZ_ALIGNED flag.
> >> If this flag is set:
> >> - Align object start address to a multiple of total_elt_sz.
> > Please specify if it's virtual or physical address.
> 
> virtual address. Yes will mention in v5. Thanks.
> 
> > What do you think about MEMPOOL_F_BLK_ALIGNED_OBJECTS instead?
> >
> > I don't really like BLK because the word "block" is not used anywhere
> > else in the mempool code. But I cannot find any good replacement for
> > it. If you have another idea, please suggest.
> >
> Ok with renaming to MEMPOOL_F_BLK_ALIGNED_OBJECTS
> 
> >> - Allocate one additional object. Additional object is needed to make
> >>   sure that requested 'n' object gets correctly populated. Example:
> >>
> >> - Let's say that we get 'x' size of memory chunk from memzone.
> >> - And application has requested 'n' object from mempool.
> >> - Ideally, we start using objects at start address 0 to...(x-block_sz)
> >>   for n obj.
> >> - Not necessarily first object address i.e. 0 is aligned to block_sz.
> >> - So we derive 'offset' value for block_sz alignment purpose i.e..'off'.
> >> - That 'off' makes sure that start address of object is blk_sz
> >>   aligned.
> >> - Calculating 'off' may end up sacrificing first block_sz area of
> >>   memzone area x. So total number of the object which can fit in the
> >>   pool area is n-1, Which is incorrect behavior.
> >>
> >> Therefore we request one additional object (/block_sz area) from memzone
> >> when F_BLK_SZ_ALIGNED flag is set.
> >>
> >> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
> >> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> >> ---
> >>  lib/librte_mempool/rte_mempool.c | 16 +++++++++++++---
> >>  lib/librte_mempool/rte_mempool.h |  1 +
> >>  2 files changed, 14 insertions(+), 3 deletions(-)
> >>
> >> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> >> index 19e5e6ddf..7610f0d1f 100644
> >> --- a/lib/librte_mempool/rte_mempool.c
> >> +++ b/lib/librte_mempool/rte_mempool.c
> >> @@ -239,10 +239,14 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
> >>   */
> >>  size_t
> >>  rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
> >> -		      __rte_unused const struct rte_mempool *mp)
> >> +		      const struct rte_mempool *mp)
> >>  {
> >>  	size_t obj_per_page, pg_num, pg_sz;
> >>  
> >> +	if (mp && mp->flags & MEMPOOL_F_POOL_BLK_SZ_ALIGNED)
> >> +		/* alignment need one additional object */
> >> +		elt_num += 1;
> >> +
> >>  	if (total_elt_sz == 0)
> >>  		return 0;
> > I'm wondering if it's correct if the mempool area is not contiguous.
> >
> > For instance:
> >  page size = 4096
> >  object size = 1900
> >  elt_num = 10
> >
> > With your calculation, you will request (11+2-1)/2 = 6 pages.
> > But actually you may need 10 pages (max), since the number of object per
> > page matching the alignement constraint is 1, not 2.
> >
> In our case, we set PMD flag MEMPOOL_F_CAPA_PHYS_CONTIG to detect contiguity,
> would fail at pool creation time, as HW don't support.

Yes but here it's generic code. If MEMPOOL_F_POOL_BLK_SZ_ALIGNED implies
MEMPOOL_F_CAPA_PHYS_CONTIG, it should be enforced somewhere.


* Re: [PATCH v4 6/7] mempool: introduce block size align flag
  2017-09-07  7:27             ` Olivier MATZ
@ 2017-09-07  7:37               ` santosh
  0 siblings, 0 replies; 116+ messages in thread
From: santosh @ 2017-09-07  7:37 UTC (permalink / raw)
  To: Olivier MATZ; +Cc: dev, thomas, jerin.jacob, hemant.agrawal


On Thursday 07 September 2017 12:57 PM, Olivier MATZ wrote:
> On Mon, Sep 04, 2017 at 11:15:50PM +0530, santosh wrote:
>> On Monday 04 September 2017 09:50 PM, Olivier MATZ wrote:
>>> On Tue, Aug 15, 2017 at 11:37:42AM +0530, Santosh Shukla wrote:
>>>> Some mempool hw like octeontx/fpa block, demands block size
>>>> (/total_elem_sz) aligned object start address.
>>>>
>>>> Introducing an MEMPOOL_F_POOL_BLK_SZ_ALIGNED flag.
>>>> If this flag is set:
>>>> - Align object start address to a multiple of total_elt_sz.
>>> Please specify if it's virtual or physical address.
>> virtual address. Yes will mention in v5. Thanks.
>>
>>> What do you think about MEMPOOL_F_BLK_ALIGNED_OBJECTS instead?
>>>
>>> I don't really like BLK because the word "block" is not used anywhere
>>> else in the mempool code. But I cannot find any good replacement for
>>> it. If you have another idea, please suggest.
>>>
>> Ok with renaming to MEMPOOL_F_BLK_ALIGNED_OBJECTS
>>
>>>> - Allocate one additional object. Additional object is needed to make
>>>>   sure that requested 'n' object gets correctly populated. Example:
>>>>
>>>> - Let's say that we get 'x' size of memory chunk from memzone.
>>>> - And application has requested 'n' object from mempool.
>>>> - Ideally, we start using objects at start address 0 to...(x-block_sz)
>>>>   for n obj.
>>>> - Not necessarily first object address i.e. 0 is aligned to block_sz.
>>>> - So we derive 'offset' value for block_sz alignment purpose i.e..'off'.
>>>> - That 'off' makes sure that start address of object is blk_sz
>>>>   aligned.
>>>> - Calculating 'off' may end up sacrificing first block_sz area of
>>>>   memzone area x. So total number of the object which can fit in the
>>>>   pool area is n-1, Which is incorrect behavior.
>>>>
>>>> Therefore we request one additional object (/block_sz area) from memzone
>>>> when F_BLK_SZ_ALIGNED flag is set.
>>>>
>>>> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
>>>> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
>>>> ---
>>>>  lib/librte_mempool/rte_mempool.c | 16 +++++++++++++---
>>>>  lib/librte_mempool/rte_mempool.h |  1 +
>>>>  2 files changed, 14 insertions(+), 3 deletions(-)
>>>>
>>>> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
>>>> index 19e5e6ddf..7610f0d1f 100644
>>>> --- a/lib/librte_mempool/rte_mempool.c
>>>> +++ b/lib/librte_mempool/rte_mempool.c
>>>> @@ -239,10 +239,14 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
>>>>   */
>>>>  size_t
>>>>  rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
>>>> -		      __rte_unused const struct rte_mempool *mp)
>>>> +		      const struct rte_mempool *mp)
>>>>  {
>>>>  	size_t obj_per_page, pg_num, pg_sz;
>>>>  
>>>> +	if (mp && mp->flags & MEMPOOL_F_POOL_BLK_SZ_ALIGNED)
>>>> +		/* alignment need one additional object */
>>>> +		elt_num += 1;
>>>> +
>>>>  	if (total_elt_sz == 0)
>>>>  		return 0;
>>> I'm wondering if it's correct if the mempool area is not contiguous.
>>>
>>> For instance:
>>>  page size = 4096
>>>  object size = 1900
>>>  elt_num = 10
>>>
>>> With your calculation, you will request (11+2-1)/2 = 6 pages.
>>> But actually you may need 10 pages (max), since the number of object per
>>> page matching the alignement constraint is 1, not 2.
>>>
>> In our case, we set PMD flag MEMPOOL_F_CAPA_PHYS_CONTIG to detect contiguity,
>> would fail at pool creation time, as HW don't support.
> Yes but here it's generic code. If MEMPOOL_F_POOL_BLK_SZ_ALIGNED implies
> MEMPOOL_F_CAPA_PHYS_CONTIG, it should be enforced somewhere.
>
Right,
Approach:
We agreed to keep _F_CAPA_PHYS_CONTIG as a flag not set by the application;
I'm thinking to keep it that way for the _ALIGNED flag too, with both set
by the mempool handler before pool creation time.
The above condition would then check for both flags. Are you fine with that
approach, or can you please suggest an alternative? Right now the octeontx
mempool block _needs_ aligned blocks and cares about contiguity, yet neither
ext-mempool in particular nor mempool in general facilitates anything like
that.

Thanks.


* Re: [PATCH v5 1/8] mempool: remove unused flags argument
  2017-09-06 11:28         ` [PATCH v5 1/8] mempool: remove unused flags argument Santosh Shukla
@ 2017-09-07  7:41           ` Olivier MATZ
  0 siblings, 0 replies; 116+ messages in thread
From: Olivier MATZ @ 2017-09-07  7:41 UTC (permalink / raw)
  To: Santosh Shukla; +Cc: dev, thomas, jerin.jacob, hemant.agrawal

On Wed, Sep 06, 2017 at 04:58:27PM +0530, Santosh Shukla wrote:
> * Remove redundant 'flags' API description from
>   - __mempool_generic_put
>   - __mempool_generic_get
>   - rte_mempool_generic_put
>   - rte_mempool_generic_get
> 
> * Remove unused 'flags' argument from
>   - rte_mempool_generic_put
>   - rte_mempool_generic_get
> 
> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>

Acked-by: Olivier Matz <olivier.matz@6wind.com>


* Re: [PATCH v5 2/8] mempool: change flags from int to unsigned int
  2017-09-06 11:28         ` [PATCH v5 2/8] mempool: change flags from int to unsigned int Santosh Shukla
@ 2017-09-07  7:43           ` Olivier MATZ
  0 siblings, 0 replies; 116+ messages in thread
From: Olivier MATZ @ 2017-09-07  7:43 UTC (permalink / raw)
  To: Santosh Shukla; +Cc: dev, thomas, jerin.jacob, hemant.agrawal

On Wed, Sep 06, 2017 at 04:58:28PM +0530, Santosh Shukla wrote:
> mp->flags is int and mempool API writes unsigned int
> value in 'flags', so fix the 'flags' data type.
> 
> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>

Acked-by: Olivier Matz <olivier.matz@6wind.com>


* Re: [PATCH v5 3/8] mempool: add flags arg in xmem size and usage
  2017-09-06 11:28         ` [PATCH v5 3/8] mempool: add flags arg in xmem size and usage Santosh Shukla
@ 2017-09-07  7:46           ` Olivier MATZ
  2017-09-07  7:49             ` santosh
  0 siblings, 1 reply; 116+ messages in thread
From: Olivier MATZ @ 2017-09-07  7:46 UTC (permalink / raw)
  To: Santosh Shukla; +Cc: dev, thomas, jerin.jacob, hemant.agrawal

On Wed, Sep 06, 2017 at 04:58:29PM +0530, Santosh Shukla wrote:
> @@ -1503,6 +1505,8 @@ size_t rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz,
>   *   Number of elements in the paddr array.
>   * @param pg_shift
>   *   LOG2 of the physical pages size.
> + * @param flags
> + *  The mempool flag.
>   * @return
>   *   On success, the number of bytes needed to store given number of
>   *   objects, aligned to the given page size. If the provided memory

Minor typo: "the mempool flagS"


* Re: [PATCH v5 4/8] doc: remove mempool notice
  2017-09-06 11:28         ` [PATCH v5 4/8] doc: remove mempool notice Santosh Shukla
@ 2017-09-07  7:47           ` Olivier MATZ
  0 siblings, 0 replies; 116+ messages in thread
From: Olivier MATZ @ 2017-09-07  7:47 UTC (permalink / raw)
  To: Santosh Shukla; +Cc: dev, thomas, jerin.jacob, hemant.agrawal

On Wed, Sep 06, 2017 at 04:58:30PM +0530, Santosh Shukla wrote:
> Removed mempool deprecation notice and
> updated change info in release_17.11.
> 
> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>

Acked-by: Olivier Matz <olivier.matz@6wind.com>


* Re: [PATCH v5 3/8] mempool: add flags arg in xmem size and usage
  2017-09-07  7:46           ` Olivier MATZ
@ 2017-09-07  7:49             ` santosh
  0 siblings, 0 replies; 116+ messages in thread
From: santosh @ 2017-09-07  7:49 UTC (permalink / raw)
  To: Olivier MATZ; +Cc: dev, thomas, jerin.jacob, hemant.agrawal


On Thursday 07 September 2017 01:16 PM, Olivier MATZ wrote:
> On Wed, Sep 06, 2017 at 04:58:29PM +0530, Santosh Shukla wrote:
>> @@ -1503,6 +1505,8 @@ size_t rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz,
>>   *   Number of elements in the paddr array.
>>   * @param pg_shift
>>   *   LOG2 of the physical pages size.
>> + * @param flags
>> + *  The mempool flag.
>>   * @return
>>   *   On success, the number of bytes needed to store given number of
>>   *   objects, aligned to the given page size. If the provided memory
> Minor typo: "the mempool flagS"
>
in v6, Thanks.


* Re: [PATCH v5 5/8] mempool: get the mempool capability
  2017-09-06 11:28         ` [PATCH v5 5/8] mempool: get the mempool capability Santosh Shukla
@ 2017-09-07  7:59           ` Olivier MATZ
  2017-09-07  8:15             ` santosh
  0 siblings, 1 reply; 116+ messages in thread
From: Olivier MATZ @ 2017-09-07  7:59 UTC (permalink / raw)
  To: Santosh Shukla; +Cc: dev, thomas, jerin.jacob, hemant.agrawal

On Wed, Sep 06, 2017 at 04:58:31PM +0530, Santosh Shukla wrote:
> Allow mempool driver to advertise his pool capability.
> For that pupose, an api(rte_mempool_ops_get_capabilities)

typo: pupose -> purpose

> ...
> --- a/lib/librte_mempool/rte_mempool.c
> +++ b/lib/librte_mempool/rte_mempool.c
> @@ -528,6 +528,12 @@ rte_mempool_populate_default(struct rte_mempool *mp)
>  	if (mp->nb_mem_chunks != 0)
>  		return -EEXIST;
>  
> +	/* Get mempool capability */

capability -> capabilities

> +	ret = rte_mempool_ops_get_capabilities(mp, &mp->flags);
> +	if (ret < 0)
> +		RTE_LOG(DEBUG, MEMPOOL, "get_capability not supported for %s\n",
> +					mp->name);
> +

I think the error can be ignored only if it's -ENOTSUP.
Else, the error should be propagated.


> --- a/lib/librte_mempool/rte_mempool.h
> +++ b/lib/librte_mempool/rte_mempool.h
> @@ -389,6 +389,12 @@ typedef int (*rte_mempool_dequeue_t)(struct rte_mempool *mp,
>   */
>  typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
>  
> +/**
> + * Get the mempool capability.
> + */

capability -> capabilities

> +typedef int (*rte_mempool_get_capabilities_t)(const struct rte_mempool *mp,
> +		unsigned int *flags);
> +
>  /** Structure defining mempool operations structure */
>  struct rte_mempool_ops {
>  	char name[RTE_MEMPOOL_OPS_NAMESIZE]; /**< Name of mempool ops struct. */
> @@ -397,6 +403,10 @@ struct rte_mempool_ops {
>  	rte_mempool_enqueue_t enqueue;   /**< Enqueue an object. */
>  	rte_mempool_dequeue_t dequeue;   /**< Dequeue an object. */
>  	rte_mempool_get_count get_count; /**< Get qty of available objs. */
> +	/**
> +	 * Get the pool capability
> +	 */
> +	rte_mempool_get_capabilities_t get_capabilities;

capability -> capabilities


>  } __rte_cache_aligned;
>  
>  #define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */
> @@ -509,6 +519,22 @@ unsigned
>  rte_mempool_ops_get_count(const struct rte_mempool *mp);
>  
>  /**
> + * @internal wrapper for mempool_ops get_capabilities callback.
> + *
> + * @param mp [in]
> + *   Pointer to the memory pool.
> + * @param flags [out]
> + *   Pointer to the mempool flag.
> + * @return
> + *   - 0: Success; mempool driver has advetised his pool capability by Oring to
> + *   flags param.
> + *   - <0: Error; code of capability function.
> + */
> +int
> +rte_mempool_ops_get_capabilities(const struct rte_mempool *mp,
> +					unsigned int *flags);
> +
> +/**

The API is correct, but the flags should simply be returned, not or-ed.
I think it should be kept as simple as possible: a function called
get_something() is expected to return it without doing anything else.
Sorry if I wasn't clear in my previous message.

If there is a need to do a OR with mp->flags, it has to be done in the caller,
i.e. rte_mempool_populate_default().


* Re: [PATCH v5 6/8] mempool: detect physical contiguous object in pool
  2017-09-06 11:28         ` [PATCH v5 6/8] mempool: detect physical contiguous object in pool Santosh Shukla
@ 2017-09-07  8:05           ` Olivier MATZ
  0 siblings, 0 replies; 116+ messages in thread
From: Olivier MATZ @ 2017-09-07  8:05 UTC (permalink / raw)
  To: Santosh Shukla; +Cc: dev, thomas, jerin.jacob, hemant.agrawal

On Wed, Sep 06, 2017 at 04:58:32PM +0530, Santosh Shukla wrote:
> The memory area containing all the objects must be physically
> contiguous.
> Introducing MEMPOOL_F_CAPA_PHYS_CONTIG flag for such use-case.
> 
> The flag useful to detect whether pool area has sufficient space
> to fit all objects. If not then return -ENOSPC.
> This way, we make sure that all object within a pool is contiguous.
> 
> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>

Acked-by: Olivier Matz <olivier.matz@6wind.com>


* Re: [PATCH v5 7/8] mempool: introduce block size align flag
  2017-09-06 11:28         ` [PATCH v5 7/8] mempool: introduce block size align flag Santosh Shukla
@ 2017-09-07  8:13           ` Olivier MATZ
  2017-09-07  8:27             ` santosh
  0 siblings, 1 reply; 116+ messages in thread
From: Olivier MATZ @ 2017-09-07  8:13 UTC (permalink / raw)
  To: Santosh Shukla; +Cc: dev, thomas, jerin.jacob, hemant.agrawal

On Wed, Sep 06, 2017 at 04:58:33PM +0530, Santosh Shukla wrote:
> --- a/lib/librte_mempool/rte_mempool.h
> +++ b/lib/librte_mempool/rte_mempool.h
> @@ -271,6 +271,10 @@ struct rte_mempool {
>   * Note: This flag should not be passed by application.
>   */
>  #define MEMPOOL_F_CAPA_PHYS_CONTIG 0x0040
> +/**
> + * Align object start address to total elem size
> + */
> +#define MEMPOOL_F_BLK_ALIGNED_OBJECTS 0x0080

Same as with the other flag: since the meaning of this flag is not obvious
when we read the name, it has to be clearly described.
- say that it's virtual address
- say that it implies MEMPOOL_F_CAPA_PHYS_CONTIG
- say that it can be advertised by a driver and the application should
  not pass it

And, since it shall not be passed by an application, I suggest to add
_CAPA too (i.e. MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS).


* Re: [PATCH v5 5/8] mempool: get the mempool capability
  2017-09-07  7:59           ` Olivier MATZ
@ 2017-09-07  8:15             ` santosh
  2017-09-07  8:39               ` Olivier MATZ
  0 siblings, 1 reply; 116+ messages in thread
From: santosh @ 2017-09-07  8:15 UTC (permalink / raw)
  To: Olivier MATZ; +Cc: dev, thomas, jerin.jacob, hemant.agrawal


On Thursday 07 September 2017 01:29 PM, Olivier MATZ wrote:
> On Wed, Sep 06, 2017 at 04:58:31PM +0530, Santosh Shukla wrote:
>> Allow mempool driver to advertise his pool capability.
>> For that pupose, an api(rte_mempool_ops_get_capabilities)
> typo: pupose -> purpose

v6.

>> ...
>> --- a/lib/librte_mempool/rte_mempool.c
>> +++ b/lib/librte_mempool/rte_mempool.c
>> @@ -528,6 +528,12 @@ rte_mempool_populate_default(struct rte_mempool *mp)
>>  	if (mp->nb_mem_chunks != 0)
>>  		return -EEXIST;
>>  
>> +	/* Get mempool capability */
> capability -> capabilities

v6.

>> +	ret = rte_mempool_ops_get_capabilities(mp, &mp->flags);
>> +	if (ret < 0)
>> +		RTE_LOG(DEBUG, MEMPOOL, "get_capability not supported for %s\n",
>> +					mp->name);
>> +
> I think the error can be ignored only if it's -ENOTSUP.
> Else, the error should be propagated.

v6.

>
>> --- a/lib/librte_mempool/rte_mempool.h
>> +++ b/lib/librte_mempool/rte_mempool.h
>> @@ -389,6 +389,12 @@ typedef int (*rte_mempool_dequeue_t)(struct rte_mempool *mp,
>>   */
>>  typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
>>  
>> +/**
>> + * Get the mempool capability.
>> + */
> capability -> capabilities
>
v6.

>> +typedef int (*rte_mempool_get_capabilities_t)(const struct rte_mempool *mp,
>> +		unsigned int *flags);
>> +
>>  /** Structure defining mempool operations structure */
>>  struct rte_mempool_ops {
>>  	char name[RTE_MEMPOOL_OPS_NAMESIZE]; /**< Name of mempool ops struct. */
>> @@ -397,6 +403,10 @@ struct rte_mempool_ops {
>>  	rte_mempool_enqueue_t enqueue;   /**< Enqueue an object. */
>>  	rte_mempool_dequeue_t dequeue;   /**< Dequeue an object. */
>>  	rte_mempool_get_count get_count; /**< Get qty of available objs. */
>> +	/**
>> +	 * Get the pool capability
>> +	 */
>> +	rte_mempool_get_capabilities_t get_capabilities;
> capability -> capabilities
>
v6.

>>  } __rte_cache_aligned;
>>  
>>  #define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */
>> @@ -509,6 +519,22 @@ unsigned
>>  rte_mempool_ops_get_count(const struct rte_mempool *mp);
>>  
>>  /**
>> + * @internal wrapper for mempool_ops get_capabilities callback.
>> + *
>> + * @param mp [in]
>> + *   Pointer to the memory pool.
>> + * @param flags [out]
>> + *   Pointer to the mempool flag.
>> + * @return
>> + *   - 0: Success; mempool driver has advetised his pool capability by Oring to
>> + *   flags param.
>> + *   - <0: Error; code of capability function.
>> + */
>> +int
>> +rte_mempool_ops_get_capabilities(const struct rte_mempool *mp,
>> +					unsigned int *flags);
>> +
>> +/**
> The API is correct, but the flags should simply be returned, not or-ed.
> I think it should be kept as simple as possible: a function called
> get_something() is expected to return it without doing anything else.
> Sorry if I wasn't clear in my previous message.
>
> If there is a need to do a OR with mp->flags, it has to be done in the caller,
> i.e. rte_mempool_populate_default().
>
Please confirm: you want the below approach:

unsigned int flags;
rte_mempool_ops_get_capabilities(mp, &flags);
mp->flags |= flags;

Is that okay with you? I'll queue it in v6.


* Re: [PATCH v5 7/8] mempool: introduce block size align flag
  2017-09-07  8:13           ` Olivier MATZ
@ 2017-09-07  8:27             ` santosh
  2017-09-07  8:57               ` Olivier MATZ
  0 siblings, 1 reply; 116+ messages in thread
From: santosh @ 2017-09-07  8:27 UTC (permalink / raw)
  To: Olivier MATZ; +Cc: dev, thomas, jerin.jacob, hemant.agrawal


On Thursday 07 September 2017 01:43 PM, Olivier MATZ wrote:
> On Wed, Sep 06, 2017 at 04:58:33PM +0530, Santosh Shukla wrote:
>> --- a/lib/librte_mempool/rte_mempool.h
>> +++ b/lib/librte_mempool/rte_mempool.h
>> @@ -271,6 +271,10 @@ struct rte_mempool {
>>   * Note: This flag should not be passed by application.
>>   */
>>  #define MEMPOOL_F_CAPA_PHYS_CONTIG 0x0040
>> +/**
>> + * Align object start address to total elem size
>> + */
>> +#define MEMPOOL_F_BLK_ALIGNED_OBJECTS 0x0080
> Same as with the other flag: since the meaning of this flag is not obvious
> when we read the name, it has to be clearly described.
> - say that it's virtual address
> - say that it implies MEMPOOL_F_CAPA_PHYS_CONTIG
> - say that it can be advertised by a driver and the application should
>   not pass it
>
> And, since it shall not be passed by an application, I suggest to add
> _CAPA too (i.e. MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS).
>
Ok, I will elaborate on the flag description in v6,
and rename it to MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS.

Can you please confirm whether you are OK with
checking MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS | _PHYS_CONTIG
in _xmem_size()/_usage(), as asked in v4 [1] for the same patch?

[1] http://dpdk.org/dev/patchwork/patch/27600/


* Re: [PATCH v5 8/8] mempool: update range info to pool
  2017-09-06 11:28         ` [PATCH v5 8/8] mempool: update range info to pool Santosh Shukla
@ 2017-09-07  8:30           ` Olivier MATZ
  2017-09-07  8:56             ` santosh
  0 siblings, 1 reply; 116+ messages in thread
From: Olivier MATZ @ 2017-09-07  8:30 UTC (permalink / raw)
  To: Santosh Shukla; +Cc: dev, thomas, jerin.jacob, hemant.agrawal

On Wed, Sep 06, 2017 at 04:58:34PM +0530, Santosh Shukla wrote:
> HW pool manager e.g. Octeontx SoC demands s/w to program start and end
> address of pool. Currently, there is no such handle in external mempool.
> Introducing rte_mempool_update_range handle which will let HW(pool
> manager) to know when common layer selects hugepage:
> For each hugepage - update its start/end address to HW pool manager.
> 
> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> ---
>  lib/librte_mempool/rte_mempool.c           |  3 +++
>  lib/librte_mempool/rte_mempool.h           | 22 ++++++++++++++++++++++
>  lib/librte_mempool/rte_mempool_ops.c       | 13 +++++++++++++
>  lib/librte_mempool/rte_mempool_version.map |  1 +
>  4 files changed, 39 insertions(+)
> 
> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> index 38dab1067..65f17a7a7 100644
> --- a/lib/librte_mempool/rte_mempool.c
> +++ b/lib/librte_mempool/rte_mempool.c
> @@ -363,6 +363,9 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
>  	struct rte_mempool_memhdr *memhdr;
>  	int ret;
>  
> +	/* update range info to mempool */
> +	rte_mempool_ops_update_range(mp, vaddr, paddr, len);
> +


My understanding is that the 2 capability flags imply that the mempool
is composed of only one memory area (rte_mempool_memhdr). Do you confirm?

So in your case, you will be notified only once with the full range of
the mempool. But if there are several memory areas, the function will
be called each time.

So I suggest to rename rte_mempool_ops_update_range() in
rte_mempool_ops_register_memory_area(), which goal is to notify the mempool
handler each time a new memory area is added.

This should be properly explained in the API comments.

I think this handler can return an error code (0 on success, negative on
error). On error, rte_mempool_populate_phys() should fail.



>  	/* create the internal ring if not already done */
>  	if ((mp->flags & MEMPOOL_F_POOL_CREATED) == 0) {
>  		ret = rte_mempool_ops_alloc(mp);
> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> index 110ffb601..dfde31c35 100644
> --- a/lib/librte_mempool/rte_mempool.h
> +++ b/lib/librte_mempool/rte_mempool.h
> @@ -405,6 +405,12 @@ typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
>  typedef int (*rte_mempool_get_capabilities_t)(const struct rte_mempool *mp,
>  		unsigned int *flags);
>  
> +/**
> + * Update range info to mempool.
> + */
> +typedef void (*rte_mempool_update_range_t)(const struct rte_mempool *mp,
> +		char *vaddr, phys_addr_t paddr, size_t len);
> +
>  /** Structure defining mempool operations structure */
>  struct rte_mempool_ops {
>  	char name[RTE_MEMPOOL_OPS_NAMESIZE]; /**< Name of mempool ops struct. */
> @@ -417,6 +423,7 @@ struct rte_mempool_ops {
>  	 * Get the pool capability
>  	 */
>  	rte_mempool_get_capabilities_t get_capabilities;
> +	rte_mempool_update_range_t update_range; /**< Update range to mempool */
>  } __rte_cache_aligned;
>  
>  #define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */
> @@ -543,6 +550,21 @@ rte_mempool_ops_get_count(const struct rte_mempool *mp);
>  int
>  rte_mempool_ops_get_capabilities(const struct rte_mempool *mp,
>  					unsigned int *flags);
> +/**
> + * @internal wrapper for mempool_ops update_range callback.
> + *
> + * @param mp
> + *   Pointer to the memory pool.
> + * @param vaddr
> + *   Pointer to the buffer virtual address
> + * @param paddr
> + *   Pointer to the buffer physical address
> + * @param len
> + *   Pool size
> + */
> +void
> +rte_mempool_ops_update_range(const struct rte_mempool *mp,
> +				char *vaddr, phys_addr_t paddr, size_t len);
>  
>  /**
>   * @internal wrapper for mempool_ops free callback.
> diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
> index 9f605ae2d..549ade2d1 100644
> --- a/lib/librte_mempool/rte_mempool_ops.c
> +++ b/lib/librte_mempool/rte_mempool_ops.c
> @@ -87,6 +87,7 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h)
>  	ops->dequeue = h->dequeue;
>  	ops->get_count = h->get_count;
>  	ops->get_capabilities = h->get_capabilities;
> +	ops->update_range = h->update_range;
>  
>  	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
>  
> @@ -138,6 +139,18 @@ rte_mempool_ops_get_capabilities(const struct rte_mempool *mp,
>  	return ops->get_capabilities(mp, flags);
>  }
>  
> +/* wrapper to update range info to external mempool */
> +void
> +rte_mempool_ops_update_range(const struct rte_mempool *mp, char *vaddr,
> +			     phys_addr_t paddr, size_t len)
> +{
> +	struct rte_mempool_ops *ops;
> +
> +	ops = rte_mempool_get_ops(mp->ops_index);
> +	RTE_FUNC_PTR_OR_RET(ops->update_range);
> +	ops->update_range(mp, vaddr, paddr, len);
> +}
> +
>  /* sets mempool ops previously registered by rte_mempool_register_ops. */
>  int
>  rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
> diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
> index 3c3471507..2663001c3 100644
> --- a/lib/librte_mempool/rte_mempool_version.map
> +++ b/lib/librte_mempool/rte_mempool_version.map
> @@ -46,5 +46,6 @@ DPDK_17.11 {
>  	global:
>  
>  	rte_mempool_ops_get_capabilities;
> +	rte_mempool_ops_update_range;
>  
>  } DPDK_16.07;
> -- 
> 2.11.0
> 


* Re: [PATCH v5 5/8] mempool: get the mempool capability
  2017-09-07  8:15             ` santosh
@ 2017-09-07  8:39               ` Olivier MATZ
  0 siblings, 0 replies; 116+ messages in thread
From: Olivier MATZ @ 2017-09-07  8:39 UTC (permalink / raw)
  To: santosh; +Cc: dev, thomas, jerin.jacob, hemant.agrawal

On Thu, Sep 07, 2017 at 01:45:58PM +0530, santosh wrote:
> > The API is correct, but the flags should simply be returned, not or-ed.
> > I think it should be kept as simple as possible: a function called
> > get_somthing() is expected to return it without doing anything else.
> > Sorry if I wasn't clear in my previous message.
> >
> > If there is a need to do a OR with mp->flags, it has to be done in the caller,
> > i.e. rte_mempool_populate_default().
> >
> Please confirm: you want the approach below:
> 
> unsigned int flags;
> rte_mempool_ops_get_capabilities(mp, &flags);
> mp->flags |= flags;
> 
> Is that okay with you? I'll queue it in v6.
> 

yes, thanks


* Re: [PATCH v5 8/8] mempool: update range info to pool
  2017-09-07  8:30           ` Olivier MATZ
@ 2017-09-07  8:56             ` santosh
  2017-09-07  9:09               ` Olivier MATZ
  0 siblings, 1 reply; 116+ messages in thread
From: santosh @ 2017-09-07  8:56 UTC (permalink / raw)
  To: Olivier MATZ; +Cc: dev, thomas, jerin.jacob, hemant.agrawal


On Thursday 07 September 2017 02:00 PM, Olivier MATZ wrote:
> On Wed, Sep 06, 2017 at 04:58:34PM +0530, Santosh Shukla wrote:
>> HW pool manager e.g. Octeontx SoC demands s/w to program start and end
>> address of pool. Currently, there is no such handle in external mempool.
>> Introducing rte_mempool_update_range handle which will let HW(pool
>> manager) to know when common layer selects hugepage:
>> For each hugepage - update its start/end address to HW pool manager.
>>
>> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
>> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
>> ---
>>  lib/librte_mempool/rte_mempool.c           |  3 +++
>>  lib/librte_mempool/rte_mempool.h           | 22 ++++++++++++++++++++++
>>  lib/librte_mempool/rte_mempool_ops.c       | 13 +++++++++++++
>>  lib/librte_mempool/rte_mempool_version.map |  1 +
>>  4 files changed, 39 insertions(+)
>>
>> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
>> index 38dab1067..65f17a7a7 100644
>> --- a/lib/librte_mempool/rte_mempool.c
>> +++ b/lib/librte_mempool/rte_mempool.c
>> @@ -363,6 +363,9 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
>>  	struct rte_mempool_memhdr *memhdr;
>>  	int ret;
>>  
>> +	/* update range info to mempool */
>> +	rte_mempool_ops_update_range(mp, vaddr, paddr, len);
>> +
>
> My understanding is that the 2 capability flags imply that the mempool
> is composed of only one memory area (rte_mempool_memhdr). Do you confirm?

yes.

> So in your case, you will be notified only once with the full range of
> the mempool. But if there are several memory areas, the function will
> be called each time.
>
> So I suggest to rename rte_mempool_ops_update_range() in
> rte_mempool_ops_register_memory_area(), which goal is to notify the mempool
> handler each time a new memory area is added.
>
> This should be properly explained in the API comments.
>
> I think this handler can return an error code (0 on success, negative on
> error). On error, rte_mempool_populate_phys() should fail.
>
Will rename to rte_mempool_ops_register_memory_area() and
change return type from void to int.


return description:
0 : for success
<0 : failure, such that
 - if handler returns -ENOTSUP then valid error case--> no error handling at mempool layer
 - Otherwise  rte_mempool_populate_phys () fails.

Are you okay with error return? pl. confirm. 

Thanks.

>
>>  	/* create the internal ring if not already done */
>>  	if ((mp->flags & MEMPOOL_F_POOL_CREATED) == 0) {
>>  		ret = rte_mempool_ops_alloc(mp);
>> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
>> index 110ffb601..dfde31c35 100644
>> --- a/lib/librte_mempool/rte_mempool.h
>> +++ b/lib/librte_mempool/rte_mempool.h
>> @@ -405,6 +405,12 @@ typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
>>  typedef int (*rte_mempool_get_capabilities_t)(const struct rte_mempool *mp,
>>  		unsigned int *flags);
>>  
>> +/**
>> + * Update range info to mempool.
>> + */
>> +typedef void (*rte_mempool_update_range_t)(const struct rte_mempool *mp,
>> +		char *vaddr, phys_addr_t paddr, size_t len);
>> +
>>  /** Structure defining mempool operations structure */
>>  struct rte_mempool_ops {
>>  	char name[RTE_MEMPOOL_OPS_NAMESIZE]; /**< Name of mempool ops struct. */
>> @@ -417,6 +423,7 @@ struct rte_mempool_ops {
>>  	 * Get the pool capability
>>  	 */
>>  	rte_mempool_get_capabilities_t get_capabilities;
>> +	rte_mempool_update_range_t update_range; /**< Update range to mempool */
>>  } __rte_cache_aligned;
>>  
>>  #define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */
>> @@ -543,6 +550,21 @@ rte_mempool_ops_get_count(const struct rte_mempool *mp);
>>  int
>>  rte_mempool_ops_get_capabilities(const struct rte_mempool *mp,
>>  					unsigned int *flags);
>> +/**
>> + * @internal wrapper for mempool_ops update_range callback.
>> + *
>> + * @param mp
>> + *   Pointer to the memory pool.
>> + * @param vaddr
>> + *   Pointer to the buffer virtual address
>> + * @param paddr
>> + *   Pointer to the buffer physical address
>> + * @param len
>> + *   Pool size
>> + */
>> +void
>> +rte_mempool_ops_update_range(const struct rte_mempool *mp,
>> +				char *vaddr, phys_addr_t paddr, size_t len);
>>  
>>  /**
>>   * @internal wrapper for mempool_ops free callback.
>> diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
>> index 9f605ae2d..549ade2d1 100644
>> --- a/lib/librte_mempool/rte_mempool_ops.c
>> +++ b/lib/librte_mempool/rte_mempool_ops.c
>> @@ -87,6 +87,7 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h)
>>  	ops->dequeue = h->dequeue;
>>  	ops->get_count = h->get_count;
>>  	ops->get_capabilities = h->get_capabilities;
>> +	ops->update_range = h->update_range;
>>  
>>  	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
>>  
>> @@ -138,6 +139,18 @@ rte_mempool_ops_get_capabilities(const struct rte_mempool *mp,
>>  	return ops->get_capabilities(mp, flags);
>>  }
>>  
>> +/* wrapper to update range info to external mempool */
>> +void
>> +rte_mempool_ops_update_range(const struct rte_mempool *mp, char *vaddr,
>> +			     phys_addr_t paddr, size_t len)
>> +{
>> +	struct rte_mempool_ops *ops;
>> +
>> +	ops = rte_mempool_get_ops(mp->ops_index);
>> +	RTE_FUNC_PTR_OR_RET(ops->update_range);
>> +	ops->update_range(mp, vaddr, paddr, len);
>> +}
>> +
>>  /* sets mempool ops previously registered by rte_mempool_register_ops. */
>>  int
>>  rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
>> diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
>> index 3c3471507..2663001c3 100644
>> --- a/lib/librte_mempool/rte_mempool_version.map
>> +++ b/lib/librte_mempool/rte_mempool_version.map
>> @@ -46,5 +46,6 @@ DPDK_17.11 {
>>  	global:
>>  
>>  	rte_mempool_ops_get_capabilities;
>> +	rte_mempool_ops_update_range;
>>  
>>  } DPDK_16.07;
>> -- 
>> 2.11.0
>>


* Re: [PATCH v5 7/8] mempool: introduce block size align flag
  2017-09-07  8:27             ` santosh
@ 2017-09-07  8:57               ` Olivier MATZ
  0 siblings, 0 replies; 116+ messages in thread
From: Olivier MATZ @ 2017-09-07  8:57 UTC (permalink / raw)
  To: santosh; +Cc: dev, thomas, jerin.jacob, hemant.agrawal

On Thu, Sep 07, 2017 at 01:57:57PM +0530, santosh wrote:
> 
> On Thursday 07 September 2017 01:43 PM, Olivier MATZ wrote:
> > On Wed, Sep 06, 2017 at 04:58:33PM +0530, Santosh Shukla wrote:
> >> --- a/lib/librte_mempool/rte_mempool.h
> >> +++ b/lib/librte_mempool/rte_mempool.h
> >> @@ -271,6 +271,10 @@ struct rte_mempool {
> >>   * Note: This flag should not be passed by application.
> >>   */
> >>  #define MEMPOOL_F_CAPA_PHYS_CONTIG 0x0040
> >> +/**
> >> + * Align object start address to total elem size
> >> + */
> >> +#define MEMPOOL_F_BLK_ALIGNED_OBJECTS 0x0080
> > Same than with the other flag: since the meaning of this flag is not obvious
> > when we read the name, it has to be clearly described.
> > - say that it's virtual address
> > - say that it implies MEMPOOL_F_CAPA_PHYS_CONTIG
> > - say that it can be advertised by a driver and the application should
> >   not pass it
> >
> > And, since it shall not be passed by an application, I suggest to add
> > _CAPA too (i.e. MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS).
> >
> Ok, I will elaborate on the FLAG description in v6,
> and rename it to MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS.
> 
> Can you please confirm whether you are OK with checking
> MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS | _PHYS_CONTIG
> in _xmem_size()/_usage(), as asked in v4 [1] for the same patch.
> 
> [1] http://dpdk.org/dev/patchwork/patch/27600/ 
> 

yes, I'm ok with your proposition:
- MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS and _PHYS_CONTIG are capa flags,
  not set by application but by the handler
- the help says that _BLK_ALIGNED_OBJECTS implies _PHYS_CONTIG
- test both (_BLK_ALIGNED_OBJECTS | _PHYS_CONTIG) in _xmem_size()/_usage()


* Re: [PATCH v5 8/8] mempool: update range info to pool
  2017-09-07  8:56             ` santosh
@ 2017-09-07  9:09               ` Olivier MATZ
  0 siblings, 0 replies; 116+ messages in thread
From: Olivier MATZ @ 2017-09-07  9:09 UTC (permalink / raw)
  To: santosh; +Cc: dev, thomas, jerin.jacob, hemant.agrawal

On Thu, Sep 07, 2017 at 02:26:49PM +0530, santosh wrote:
> 
> On Thursday 07 September 2017 02:00 PM, Olivier MATZ wrote:
> > On Wed, Sep 06, 2017 at 04:58:34PM +0530, Santosh Shukla wrote:
> >> HW pool manager e.g. Octeontx SoC demands s/w to program start and end
> >> address of pool. Currently, there is no such handle in external mempool.
> >> Introducing rte_mempool_update_range handle which will let HW(pool
> >> manager) to know when common layer selects hugepage:
> >> For each hugepage - update its start/end address to HW pool manager.
> >>
> >> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
> >> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> >> ---
> >>  lib/librte_mempool/rte_mempool.c           |  3 +++
> >>  lib/librte_mempool/rte_mempool.h           | 22 ++++++++++++++++++++++
> >>  lib/librte_mempool/rte_mempool_ops.c       | 13 +++++++++++++
> >>  lib/librte_mempool/rte_mempool_version.map |  1 +
> >>  4 files changed, 39 insertions(+)
> >>
> >> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> >> index 38dab1067..65f17a7a7 100644
> >> --- a/lib/librte_mempool/rte_mempool.c
> >> +++ b/lib/librte_mempool/rte_mempool.c
> >> @@ -363,6 +363,9 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
> >>  	struct rte_mempool_memhdr *memhdr;
> >>  	int ret;
> >>  
> >> +	/* update range info to mempool */
> >> +	rte_mempool_ops_update_range(mp, vaddr, paddr, len);
> >> +
> >
> > My understanding is that the 2 capability flags imply that the mempool
> > is composed of only one memory area (rte_mempool_memhdr). Do you confirm?
> 
> yes.
> 
> > So in your case, you will be notified only once with the full range of
> > the mempool. But if there are several memory areas, the function will
> > be called each time.
> >
> > So I suggest to rename rte_mempool_ops_update_range() in
> > rte_mempool_ops_register_memory_area(), which goal is to notify the mempool
> > handler each time a new memory area is added.
> >
> > This should be properly explained in the API comments.
> >
> > I think this handler can return an error code (0 on success, negative on
> > error). On error, rte_mempool_populate_phys() should fail.
> >
> Will rename to rte_mempool_ops_register_memory_area() and
> change return type from void to int.
> 
> 
> return description:
> 0 : for success
> <0 : failure, such that
>  - if handler returns -ENOTSUP then valid error case--> no error handling at mempool layer
>  - Otherwise  rte_mempool_populate_phys () fails.
> 
> Are you okay with error return? pl. confirm. 

yes.
Just take care of doubled spaces and avoid "-->".


* [PATCH v6 0/8] Infrastructure to support octeontx HW mempool manager
  2017-09-06 11:28       ` [PATCH v5 0/8] Infrastructure to support octeontx HW mempool manager Santosh Shukla
                           ` (7 preceding siblings ...)
  2017-09-06 11:28         ` [PATCH v5 8/8] mempool: update range info to pool Santosh Shukla
@ 2017-09-07 15:30         ` Santosh Shukla
  2017-09-07 15:30           ` [PATCH v6 1/8] mempool: remove unused flags argument Santosh Shukla
                             ` (9 more replies)
  8 siblings, 10 replies; 116+ messages in thread
From: Santosh Shukla @ 2017-09-07 15:30 UTC (permalink / raw)
  To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla

v6:
Includes v5 review changes, suggested by Olivier.
Patches rebased on tip, commit: 06791a4bcedf

v5:
Includes v4 review changes, suggested by Olivier.

v4:
Includes:
- mempool deprecation changes, see [1]
- patches rebased against v17.11-rc0.

In order to support the octeontx HW mempool manager, the common mempool layer
must meet the conditions below:
- Object start addresses should be block size (total elem size) aligned.
- Objects must have physically contiguous addresses within the pool.

Right now, the mempool supports neither.

This patchset adds infrastructure to support both conditions in a _generic_
way. The proposed solution won't affect existing mempool drivers or their
functionality.

Summary:
Introducing capability flag. Now mempool drivers can advertise their
capabilities to common mempool layer(at the pool creation time).
Handlers are introduced in order to support capability flag.

Flags:
* MEMPOOL_F_CAPA_PHYS_CONTIG - If this flag is set, detect whether the object
has a physically contiguous address within a hugepage.

* MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS - If this flag is set, make sure that
object addresses are block size aligned.

API:
Two handlers are introduced:
* rte_mempool_ops_get_capabilities - advertise mempool manager capabilities.
* rte_mempool_ops_register_memory_area - notify the memory area (start/end
					 address) to the HW mempool manager.

Change History:
v5 --> v6:
- Renamed the flag from MEMPOOL_F_BLK_ALIGNED_OBJECTS to
  MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS and updated the API description
  (suggested by Olivier)
- Muxed the _ALIGNED capability flag with _PHYS_CONTIG per the v5 thread [5].
- Renamed the API from rte_mempool_ops_update_range to
  rte_mempool_ops_register_memory_area (suggested by Olivier)
- Updated the descriptions of both the flags and the API (suggested by
  Olivier).

Refer individual patch for detailed change history.


v4 --> v5:
- Replaced mp param with flags param in xmem_size/_usage() api. (Suggested by
  Olivier)
- Renamed flags from MEMPOOL_F_POOL_BLK_SZ_ALIGNED to
  MEMPOOL_F_BLK_ALIGNED_OBJECTS (suggested by Olivier)
- added flag param in get_capabilities() handle (suggested by Olivier)


v3 --> v4:
* [01 - 02 - 03/07] mempool deprecation notice changes.
* [04 - 05 - 06 - 07/07] are v3 patches.

v2 --> v3:
(Note: the v3 work is based on the deprecation notice [1]; it targets 17.11)
* Changed _version.map from 17.08 to 17.11.
* build fixes reported by stv_sys.
* Patchset rebased on upstream commit: da94a999.


v1 --> v2 :
* [01/06] Per the deprecation notice [1], changed the rte_mempool 'flag'
  data type from int to unsigned int and removed the flag param
  from the _xmem_size/usage API.
* [02/06] Incorporated review feedback from v1 [2] (suggested by Olivier)
* [03/06] Renamed the flag to MEMPOOL_F_CAPA_PHYS_CONTIG
  and reworded the comment (suggested by Olivier per v1 [3])
* [04/06] Added a new mempool arg in xmem_size/usage (suggested by Olivier)
* [05/06] Patch description changed.
        - Removed the else-if bracket mix
        - Removed the sanity check for alignment
        - Removed the extra var delta
        - Removed __rte_unused from xmem_usage/size and added the
          _BLK_SZ_ALIGN check.
        (Suggested by Olivier per v1 [4])
* [06/06] Added RTE_FUNC_PTR_OR_RET in rte_mempool_ops_update_ops.

Checkpatch status:
CLEAN.

Thanks.

[1] deprecation notice v2: http://dpdk.org/dev/patchwork/patch/27079/
[2] v1: http://dpdk.org/dev/patchwork/patch/25603/
[3] v1: http://dpdk.org/dev/patchwork/patch/25604/
[4] v1: http://dpdk.org/dev/patchwork/patch/25605/
[5] v5: http://dpdk.org/dev/patchwork/patch/28418/

Santosh Shukla (8):
  mempool: remove unused flags argument
  mempool: change flags from int to unsigned int
  mempool: add flags arg in xmem size and usage
  doc: remove mempool notice
  mempool: get the mempool capability
  mempool: detect physical contiguous object in pool
  mempool: introduce block size align flag
  mempool: notify memory area to pool

 doc/guides/rel_notes/deprecation.rst       |   9 ---
 doc/guides/rel_notes/release_17_11.rst     |   7 ++
 drivers/net/xenvirt/rte_mempool_gntalloc.c |   7 +-
 lib/librte_mempool/rte_mempool.c           |  58 ++++++++++++--
 lib/librte_mempool/rte_mempool.h           | 120 +++++++++++++++++++++++------
 lib/librte_mempool/rte_mempool_ops.c       |  29 +++++++
 lib/librte_mempool/rte_mempool_version.map |   8 ++
 test/test/test_mempool.c                   |  25 +++---
 test/test/test_mempool_perf.c              |   4 +-
 9 files changed, 209 insertions(+), 58 deletions(-)

-- 
2.14.1


* [PATCH v6 1/8] mempool: remove unused flags argument
  2017-09-07 15:30         ` [PATCH v6 0/8] Infrastructure to support octeontx HW mempool manager Santosh Shukla
@ 2017-09-07 15:30           ` Santosh Shukla
  2017-09-07 15:30           ` [PATCH v6 2/8] mempool: change flags from int to unsigned int Santosh Shukla
                             ` (8 subsequent siblings)
  9 siblings, 0 replies; 116+ messages in thread
From: Santosh Shukla @ 2017-09-07 15:30 UTC (permalink / raw)
  To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla

* Remove redundant 'flags' API description from
  - __mempool_generic_put
  - __mempool_generic_get
  - rte_mempool_generic_put
  - rte_mempool_generic_get

* Remove unused 'flags' argument from
  - rte_mempool_generic_put
  - rte_mempool_generic_get

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
 lib/librte_mempool/rte_mempool.h | 31 +++++++++----------------------
 test/test/test_mempool.c         | 18 +++++++++---------
 test/test/test_mempool_perf.c    |  4 ++--
 3 files changed, 20 insertions(+), 33 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 76b5b3b15..ec3884473 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -1034,13 +1034,10 @@ rte_mempool_default_cache(struct rte_mempool *mp, unsigned lcore_id)
  *   positive.
  * @param cache
  *   A pointer to a mempool cache structure. May be NULL if not needed.
- * @param flags
- *   The flags used for the mempool creation.
- *   Single-producer (MEMPOOL_F_SP_PUT flag) or multi-producers.
  */
 static __rte_always_inline void
 __mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
-		      unsigned n, struct rte_mempool_cache *cache)
+		      unsigned int n, struct rte_mempool_cache *cache)
 {
 	void **cache_objs;
 
@@ -1096,14 +1093,10 @@ __mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
  *   The number of objects to add in the mempool from the obj_table.
  * @param cache
  *   A pointer to a mempool cache structure. May be NULL if not needed.
- * @param flags
- *   The flags used for the mempool creation.
- *   Single-producer (MEMPOOL_F_SP_PUT flag) or multi-producers.
  */
 static __rte_always_inline void
 rte_mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
-			unsigned n, struct rte_mempool_cache *cache,
-			__rte_unused int flags)
+			unsigned int n, struct rte_mempool_cache *cache)
 {
 	__mempool_check_cookies(mp, obj_table, n, 0);
 	__mempool_generic_put(mp, obj_table, n, cache);
@@ -1125,11 +1118,11 @@ rte_mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
  */
 static __rte_always_inline void
 rte_mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
-		     unsigned n)
+		     unsigned int n)
 {
 	struct rte_mempool_cache *cache;
 	cache = rte_mempool_default_cache(mp, rte_lcore_id());
-	rte_mempool_generic_put(mp, obj_table, n, cache, mp->flags);
+	rte_mempool_generic_put(mp, obj_table, n, cache);
 }
 
 /**
@@ -1160,16 +1153,13 @@ rte_mempool_put(struct rte_mempool *mp, void *obj)
  *   The number of objects to get, must be strictly positive.
  * @param cache
  *   A pointer to a mempool cache structure. May be NULL if not needed.
- * @param flags
- *   The flags used for the mempool creation.
- *   Single-consumer (MEMPOOL_F_SC_GET flag) or multi-consumers.
  * @return
  *   - >=0: Success; number of objects supplied.
  *   - <0: Error; code of ring dequeue function.
  */
 static __rte_always_inline int
 __mempool_generic_get(struct rte_mempool *mp, void **obj_table,
-		      unsigned n, struct rte_mempool_cache *cache)
+		      unsigned int n, struct rte_mempool_cache *cache)
 {
 	int ret;
 	uint32_t index, len;
@@ -1241,16 +1231,13 @@ __mempool_generic_get(struct rte_mempool *mp, void **obj_table,
  *   The number of objects to get from mempool to obj_table.
  * @param cache
  *   A pointer to a mempool cache structure. May be NULL if not needed.
- * @param flags
- *   The flags used for the mempool creation.
- *   Single-consumer (MEMPOOL_F_SC_GET flag) or multi-consumers.
  * @return
  *   - 0: Success; objects taken.
  *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.
  */
 static __rte_always_inline int
-rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table, unsigned n,
-			struct rte_mempool_cache *cache, __rte_unused int flags)
+rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table,
+			unsigned int n, struct rte_mempool_cache *cache)
 {
 	int ret;
 	ret = __mempool_generic_get(mp, obj_table, n, cache);
@@ -1282,11 +1269,11 @@ rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table, unsigned n,
  *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.
  */
 static __rte_always_inline int
-rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
+rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned int n)
 {
 	struct rte_mempool_cache *cache;
 	cache = rte_mempool_default_cache(mp, rte_lcore_id());
-	return rte_mempool_generic_get(mp, obj_table, n, cache, mp->flags);
+	return rte_mempool_generic_get(mp, obj_table, n, cache);
 }
 
 /**
diff --git a/test/test/test_mempool.c b/test/test/test_mempool.c
index 0a4423954..47dc3ac5f 100644
--- a/test/test/test_mempool.c
+++ b/test/test/test_mempool.c
@@ -129,7 +129,7 @@ test_mempool_basic(struct rte_mempool *mp, int use_external_cache)
 	rte_mempool_dump(stdout, mp);
 
 	printf("get an object\n");
-	if (rte_mempool_generic_get(mp, &obj, 1, cache, 0) < 0)
+	if (rte_mempool_generic_get(mp, &obj, 1, cache) < 0)
 		GOTO_ERR(ret, out);
 	rte_mempool_dump(stdout, mp);
 
@@ -152,21 +152,21 @@ test_mempool_basic(struct rte_mempool *mp, int use_external_cache)
 #endif
 
 	printf("put the object back\n");
-	rte_mempool_generic_put(mp, &obj, 1, cache, 0);
+	rte_mempool_generic_put(mp, &obj, 1, cache);
 	rte_mempool_dump(stdout, mp);
 
 	printf("get 2 objects\n");
-	if (rte_mempool_generic_get(mp, &obj, 1, cache, 0) < 0)
+	if (rte_mempool_generic_get(mp, &obj, 1, cache) < 0)
 		GOTO_ERR(ret, out);
-	if (rte_mempool_generic_get(mp, &obj2, 1, cache, 0) < 0) {
-		rte_mempool_generic_put(mp, &obj, 1, cache, 0);
+	if (rte_mempool_generic_get(mp, &obj2, 1, cache) < 0) {
+		rte_mempool_generic_put(mp, &obj, 1, cache);
 		GOTO_ERR(ret, out);
 	}
 	rte_mempool_dump(stdout, mp);
 
 	printf("put the objects back\n");
-	rte_mempool_generic_put(mp, &obj, 1, cache, 0);
-	rte_mempool_generic_put(mp, &obj2, 1, cache, 0);
+	rte_mempool_generic_put(mp, &obj, 1, cache);
+	rte_mempool_generic_put(mp, &obj2, 1, cache);
 	rte_mempool_dump(stdout, mp);
 
 	/*
@@ -178,7 +178,7 @@ test_mempool_basic(struct rte_mempool *mp, int use_external_cache)
 		GOTO_ERR(ret, out);
 
 	for (i = 0; i < MEMPOOL_SIZE; i++) {
-		if (rte_mempool_generic_get(mp, &objtable[i], 1, cache, 0) < 0)
+		if (rte_mempool_generic_get(mp, &objtable[i], 1, cache) < 0)
 			break;
 	}
 
@@ -200,7 +200,7 @@ test_mempool_basic(struct rte_mempool *mp, int use_external_cache)
 				ret = -1;
 		}
 
-		rte_mempool_generic_put(mp, &objtable[i], 1, cache, 0);
+		rte_mempool_generic_put(mp, &objtable[i], 1, cache);
 	}
 
 	free(objtable);
diff --git a/test/test/test_mempool_perf.c b/test/test/test_mempool_perf.c
index 07b28c066..3b8f7de7c 100644
--- a/test/test/test_mempool_perf.c
+++ b/test/test/test_mempool_perf.c
@@ -186,7 +186,7 @@ per_lcore_mempool_test(void *arg)
 				ret = rte_mempool_generic_get(mp,
 							      &obj_table[idx],
 							      n_get_bulk,
-							      cache, 0);
+							      cache);
 				if (unlikely(ret < 0)) {
 					rte_mempool_dump(stdout, mp);
 					/* in this case, objects are lost... */
@@ -200,7 +200,7 @@ per_lcore_mempool_test(void *arg)
 			while (idx < n_keep) {
 				rte_mempool_generic_put(mp, &obj_table[idx],
 							n_put_bulk,
-							cache, 0);
+							cache);
 				idx += n_put_bulk;
 			}
 		}
-- 
2.14.1


* [PATCH v6 2/8] mempool: change flags from int to unsigned int
  2017-09-07 15:30         ` [PATCH v6 0/8] Infrastructure to support octeontx HW mempool manager Santosh Shukla
  2017-09-07 15:30           ` [PATCH v6 1/8] mempool: remove unused flags argument Santosh Shukla
@ 2017-09-07 15:30           ` Santosh Shukla
  2017-09-07 15:30           ` [PATCH v6 3/8] mempool: add flags arg in xmem size and usage Santosh Shukla
                             ` (7 subsequent siblings)
  9 siblings, 0 replies; 116+ messages in thread
From: Santosh Shukla @ 2017-09-07 15:30 UTC (permalink / raw)
  To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla

mp->flags is an int, but the mempool API writes unsigned int
values into 'flags', so fix the 'flags' data type.

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
 lib/librte_mempool/rte_mempool.c | 4 ++--
 lib/librte_mempool/rte_mempool.h | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 6fc3c9c7c..237665c65 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -515,7 +515,7 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
 int
 rte_mempool_populate_default(struct rte_mempool *mp)
 {
-	int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
+	unsigned int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
 	char mz_name[RTE_MEMZONE_NAMESIZE];
 	const struct rte_memzone *mz;
 	size_t size, total_elt_sz, align, pg_sz, pg_shift;
@@ -742,7 +742,7 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
 	struct rte_tailq_entry *te = NULL;
 	const struct rte_memzone *mz = NULL;
 	size_t mempool_size;
-	int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
+	unsigned int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
 	struct rte_mempool_objsz objsz;
 	unsigned lcore_id;
 	int ret;
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index ec3884473..bf65d62fe 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -226,7 +226,7 @@ struct rte_mempool {
 	};
 	void *pool_config;               /**< optional args for ops alloc. */
 	const struct rte_memzone *mz;    /**< Memzone where pool is alloc'd. */
-	int flags;                       /**< Flags of the mempool. */
+	unsigned int flags;              /**< Flags of the mempool. */
 	int socket_id;                   /**< Socket id passed at create. */
 	uint32_t size;                   /**< Max size of the mempool. */
 	uint32_t cache_size;
-- 
2.14.1


* [PATCH v6 3/8] mempool: add flags arg in xmem size and usage
  2017-09-07 15:30         ` [PATCH v6 0/8] Infrastructure to support octeontx HW mempool manager Santosh Shukla
  2017-09-07 15:30           ` [PATCH v6 1/8] mempool: remove unused flags argument Santosh Shukla
  2017-09-07 15:30           ` [PATCH v6 2/8] mempool: change flags from int to unsigned int Santosh Shukla
@ 2017-09-07 15:30           ` Santosh Shukla
  2017-09-25 11:24             ` Olivier MATZ
  2017-09-07 15:30           ` [PATCH v6 4/8] doc: remove mempool notice Santosh Shukla
                             ` (6 subsequent siblings)
  9 siblings, 1 reply; 116+ messages in thread
From: Santosh Shukla @ 2017-09-07 15:30 UTC (permalink / raw)
  To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla

xmem_size and xmem_usage need to know the status of the mempool flags,
so add a 'flags' argument to the _xmem_size/usage() APIs.

A following patch will make use of it.
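
For context, the sizing logic these APIs wrap can be sketched as below. This
is a simplified, hypothetical restatement of the page-based rule behind
rte_mempool_xmem_size() (xmem_size_sketch and align_ceil are illustrative
names, not library code, and edge cases are approximated):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Round v up to a multiple of a (a > 0). */
static size_t
align_ceil(size_t v, size_t a)
{
	return ((v + a - 1) / a) * a;
}

/*
 * Simplified sketch of the page-based sizing rule: objects do not
 * cross page boundaries, so count how many pages of (1 << pg_shift)
 * bytes are needed for elt_num objects of total_elt_sz bytes each.
 */
static size_t
xmem_size_sketch(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift)
{
	size_t pg_sz, obj_per_page, pg_num;

	if (total_elt_sz == 0)
		return 0;
	if (pg_shift == 0)	/* 0 means: ignore page boundaries */
		return total_elt_sz * elt_num;

	pg_sz = (size_t)1 << pg_shift;
	obj_per_page = pg_sz / total_elt_sz;
	if (obj_per_page == 0)	/* one object spans several pages */
		return align_ceil(total_elt_sz, pg_sz) * elt_num;

	pg_num = (elt_num + obj_per_page - 1) / obj_per_page;
	return pg_num << pg_shift;
}
```

For example, 100 objects of 64 bytes on 4 KB pages (pg_shift = 12) fit 64
objects per page, so two pages (8192 bytes) are needed. The new 'flags'
argument lets later patches in this series adjust this count for HW
capability flags.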

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
v5 --> v6:
- Fix 'flags' typo (Suggested by Olivier).

v4 --> v5:
- Removed 'mp' param and replaced with 'flags' param for
  xmem_size/_usage api. (suggested by Olivier)
Refer [1].
[1] http://dpdk.org/dev/patchwork/patch/27596/

 drivers/net/xenvirt/rte_mempool_gntalloc.c |  7 ++++---
 lib/librte_mempool/rte_mempool.c           | 11 +++++++----
 lib/librte_mempool/rte_mempool.h           |  8 ++++++--
 test/test/test_mempool.c                   |  7 ++++---
 4 files changed, 21 insertions(+), 12 deletions(-)

diff --git a/drivers/net/xenvirt/rte_mempool_gntalloc.c b/drivers/net/xenvirt/rte_mempool_gntalloc.c
index 73e82f808..7f7aecdc1 100644
--- a/drivers/net/xenvirt/rte_mempool_gntalloc.c
+++ b/drivers/net/xenvirt/rte_mempool_gntalloc.c
@@ -79,7 +79,7 @@ _create_mempool(const char *name, unsigned elt_num, unsigned elt_size,
 		   unsigned cache_size, unsigned private_data_size,
 		   rte_mempool_ctor_t *mp_init, void *mp_init_arg,
 		   rte_mempool_obj_cb_t *obj_init, void *obj_init_arg,
-		   int socket_id, unsigned flags)
+		   int socket_id, unsigned int flags)
 {
 	struct _mempool_gntalloc_info mgi;
 	struct rte_mempool *mp = NULL;
@@ -114,7 +114,7 @@ _create_mempool(const char *name, unsigned elt_num, unsigned elt_size,
 	pg_shift = rte_bsf32(pg_sz);
 
 	rte_mempool_calc_obj_size(elt_size, flags, &objsz);
-	sz = rte_mempool_xmem_size(elt_num, objsz.total_size, pg_shift);
+	sz = rte_mempool_xmem_size(elt_num, objsz.total_size, pg_shift, flags);
 	pg_num = sz >> pg_shift;
 
 	pa_arr = calloc(pg_num, sizeof(pa_arr[0]));
@@ -162,7 +162,8 @@ _create_mempool(const char *name, unsigned elt_num, unsigned elt_size,
 	 * Check that allocated size is big enough to hold elt_num
 	 * objects and a calcualte how many bytes are actually required.
 	 */
-	usz = rte_mempool_xmem_usage(va, elt_num, objsz.total_size, pa_arr, pg_num, pg_shift);
+	usz = rte_mempool_xmem_usage(va, elt_num, objsz.total_size, pa_arr,
+				     pg_num, pg_shift, flags);
 	if (usz < 0) {
 		mp = NULL;
 		i = pg_num;
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 237665c65..005240042 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -238,7 +238,8 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
  * Calculate maximum amount of memory required to store given number of objects.
  */
 size_t
-rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift)
+rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
+		      __rte_unused unsigned int flags)
 {
 	size_t obj_per_page, pg_num, pg_sz;
 
@@ -264,7 +265,7 @@ rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift)
 ssize_t
 rte_mempool_xmem_usage(__rte_unused void *vaddr, uint32_t elt_num,
 	size_t total_elt_sz, const phys_addr_t paddr[], uint32_t pg_num,
-	uint32_t pg_shift)
+	uint32_t pg_shift, __rte_unused unsigned int flags)
 {
 	uint32_t elt_cnt = 0;
 	phys_addr_t start, end;
@@ -543,7 +544,8 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 
 	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
 	for (mz_id = 0, n = mp->size; n > 0; mz_id++, n -= ret) {
-		size = rte_mempool_xmem_size(n, total_elt_sz, pg_shift);
+		size = rte_mempool_xmem_size(n, total_elt_sz, pg_shift,
+						mp->flags);
 
 		ret = snprintf(mz_name, sizeof(mz_name),
 			RTE_MEMPOOL_MZ_FORMAT "_%d", mp->name, mz_id);
@@ -600,7 +602,8 @@ get_anon_size(const struct rte_mempool *mp)
 	pg_sz = getpagesize();
 	pg_shift = rte_bsf32(pg_sz);
 	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
-	size = rte_mempool_xmem_size(mp->size, total_elt_sz, pg_shift);
+	size = rte_mempool_xmem_size(mp->size, total_elt_sz, pg_shift,
+					mp->flags);
 
 	return size;
 }
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index bf65d62fe..85eb770dc 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -1476,11 +1476,13 @@ uint32_t rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
  *   by rte_mempool_calc_obj_size().
  * @param pg_shift
  *   LOG2 of the physical pages size. If set to 0, ignore page boundaries.
+ * @param flags
+ *  The mempool flags.
  * @return
  *   Required memory size aligned at page boundary.
  */
 size_t rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz,
-	uint32_t pg_shift);
+	uint32_t pg_shift, unsigned int flags);
 
 /**
  * Get the size of memory required to store mempool elements.
@@ -1503,6 +1505,8 @@ size_t rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz,
  *   Number of elements in the paddr array.
  * @param pg_shift
  *   LOG2 of the physical pages size.
+ * @param flags
+ *  The mempool flags.
  * @return
  *   On success, the number of bytes needed to store given number of
  *   objects, aligned to the given page size. If the provided memory
@@ -1511,7 +1515,7 @@ size_t rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz,
  */
 ssize_t rte_mempool_xmem_usage(void *vaddr, uint32_t elt_num,
 	size_t total_elt_sz, const phys_addr_t paddr[], uint32_t pg_num,
-	uint32_t pg_shift);
+	uint32_t pg_shift, unsigned int flags);
 
 /**
  * Walk list of all memory pools
diff --git a/test/test/test_mempool.c b/test/test/test_mempool.c
index 47dc3ac5f..a225e1209 100644
--- a/test/test/test_mempool.c
+++ b/test/test/test_mempool.c
@@ -474,7 +474,7 @@ test_mempool_same_name_twice_creation(void)
 }
 
 /*
- * BAsic test for mempool_xmem functions.
+ * Basic test for mempool_xmem functions.
  */
 static int
 test_mempool_xmem_misc(void)
@@ -485,10 +485,11 @@ test_mempool_xmem_misc(void)
 
 	elt_num = MAX_KEEP;
 	total_size = rte_mempool_calc_obj_size(MEMPOOL_ELT_SIZE, 0, NULL);
-	sz = rte_mempool_xmem_size(elt_num, total_size, MEMPOOL_PG_SHIFT_MAX);
+	sz = rte_mempool_xmem_size(elt_num, total_size, MEMPOOL_PG_SHIFT_MAX,
+					0);
 
 	usz = rte_mempool_xmem_usage(NULL, elt_num, total_size, 0, 1,
-		MEMPOOL_PG_SHIFT_MAX);
+		MEMPOOL_PG_SHIFT_MAX, 0);
 
 	if (sz != (size_t)usz)  {
 		printf("failure @ %s: rte_mempool_xmem_usage(%u, %u) "
-- 
2.14.1


* [PATCH v6 4/8] doc: remove mempool notice
  2017-09-07 15:30         ` [PATCH v6 0/8] Infrastructure to support octeontx HW mempool manager Santosh Shukla
                             ` (2 preceding siblings ...)
  2017-09-07 15:30           ` [PATCH v6 3/8] mempool: add flags arg in xmem size and usage Santosh Shukla
@ 2017-09-07 15:30           ` Santosh Shukla
  2017-09-07 15:30           ` [PATCH v6 5/8] mempool: get the mempool capability Santosh Shukla
                             ` (5 subsequent siblings)
  9 siblings, 0 replies; 116+ messages in thread
From: Santosh Shukla @ 2017-09-07 15:30 UTC (permalink / raw)
  To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla

Removed the mempool deprecation notice and
updated the change information in release_17.11.

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
 doc/guides/rel_notes/deprecation.rst   | 9 ---------
 doc/guides/rel_notes/release_17_11.rst | 7 +++++++
 2 files changed, 7 insertions(+), 9 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 3362f3350..0e4cb1f95 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -44,15 +44,6 @@ Deprecation Notices
   PKT_RX_QINQ_STRIPPED, that are better described. The old flags and
   their behavior will be kept until 17.08 and will be removed in 17.11.
 
-* mempool: The following will be modified in 17.11:
-
-  - ``rte_mempool_xmem_size`` and ``rte_mempool_xmem_usage`` need to know
-    the mempool flag status so adding new param rte_mempool in those API.
-  - Removing __rte_unused int flag param from ``rte_mempool_generic_put``
-    and ``rte_mempool_generic_get`` API.
-  - ``rte_mempool`` flags data type will changed from int to
-    unsigned int.
-
 * ethdev: Tx offloads will no longer be enabled by default in 17.11.
   Instead, the ``rte_eth_txmode`` structure will be extended with
   bit field to enable each Tx offload.
diff --git a/doc/guides/rel_notes/release_17_11.rst b/doc/guides/rel_notes/release_17_11.rst
index 170f4f916..6b17af7bc 100644
--- a/doc/guides/rel_notes/release_17_11.rst
+++ b/doc/guides/rel_notes/release_17_11.rst
@@ -110,6 +110,13 @@ API Changes
    Also, make sure to start the actual text at the margin.
    =========================================================
 
+* **The following changes made in mempool library**
+
+  * Moved ``flags`` datatype from int to unsigned int for ``rte_mempool``.
+  * Removed ``__rte_unused int flag`` param from ``rte_mempool_generic_put``
+    and ``rte_mempool_generic_get`` API.
+  * Added ``flags`` param in ``rte_mempool_xmem_size`` and
+    ``rte_mempool_xmem_usage``.
 
 ABI Changes
 -----------
-- 
2.14.1


* [PATCH v6 5/8] mempool: get the mempool capability
  2017-09-07 15:30         ` [PATCH v6 0/8] Infrastructure to support octeontx HW mempool manager Santosh Shukla
                             ` (3 preceding siblings ...)
  2017-09-07 15:30           ` [PATCH v6 4/8] doc: remove mempool notice Santosh Shukla
@ 2017-09-07 15:30           ` Santosh Shukla
  2017-09-25 11:26             ` Olivier MATZ
  2017-09-07 15:30           ` [PATCH v6 6/8] mempool: detect physical contiguous object in pool Santosh Shukla
                             ` (4 subsequent siblings)
  9 siblings, 1 reply; 116+ messages in thread
From: Santosh Shukla @ 2017-09-07 15:30 UTC (permalink / raw)
  To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla

Allow the mempool driver to advertise its pool capabilities.
For that purpose, an API (rte_mempool_ops_get_capabilities)
and a ->get_capabilities() handler have been introduced.
- Upon a ->get_capabilities() call, the mempool driver advertises
its capabilities in the mempool flags param.
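
A driver-side implementation of this handler could look like the following
sketch. The flag values mirror the ones introduced later in this series, but
example_get_capabilities is an illustrative name only, not part of any driver:

```c
#include <assert.h>
#include <stddef.h>

/* Capability flag values as introduced later in this series. */
#define MEMPOOL_F_CAPA_PHYS_CONTIG         0x0040
#define MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS 0x0080

struct rte_mempool;	/* opaque for this sketch */

/*
 * Hypothetical ->get_capabilities() handler: the driver OR-s its
 * capability flags into *flags; the common layer then applies
 * mp->flags |= mp_flags in rte_mempool_populate_default().
 */
static int
example_get_capabilities(const struct rte_mempool *mp, unsigned int *flags)
{
	(void)mp;
	*flags |= MEMPOOL_F_CAPA_PHYS_CONTIG |
		  MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS;
	return 0;
}
```

A handler that does not implement the callback simply leaves the ops field
NULL, and the wrapper returns -ENOTSUP, which the common layer treats as a
valid "no capabilities" case.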

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
v5 --> v6:
- Fixed typos from Capability to capabilities (suggested by Olivier)
- Now mp->flags |= flag happens at the mempool layer (suggested by Olivier)
- For an -ENOTSUP return value, the patch ignores it and logs a 'not
  supported' message; for any other error value < 0, it returns from
  rte_mempool_populate_default().
- Updated the API error description accordingly.

For history: refer [1].
[1] http://dpdk.org/dev/patchwork/patch/28416/

v4 --> v5:
- Added flags as second param in get_capability api (suggested by Olivier)
- Removed 80 char warning. (suggested by Olivier)
- Upadted API description, now explicitly mentioning that update as a
  Or'ed operation by mempool handle. (suggested by Olivier)

Refer [2].
[2] http://dpdk.org/dev/patchwork/patch/27598/

 lib/librte_mempool/rte_mempool.c           | 13 +++++++++++++
 lib/librte_mempool/rte_mempool.h           | 27 +++++++++++++++++++++++++++
 lib/librte_mempool/rte_mempool_ops.c       | 15 +++++++++++++++
 lib/librte_mempool/rte_mempool_version.map |  7 +++++++
 4 files changed, 62 insertions(+)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 005240042..92de39562 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -522,12 +522,25 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 	size_t size, total_elt_sz, align, pg_sz, pg_shift;
 	phys_addr_t paddr;
 	unsigned mz_id, n;
+	unsigned int mp_flags;
 	int ret;
 
 	/* mempool must not be populated */
 	if (mp->nb_mem_chunks != 0)
 		return -EEXIST;
 
+	/* Get mempool capabilities */
+	mp_flags = 0;
+	ret = rte_mempool_ops_get_capabilities(mp, &mp_flags);
+	if (ret == -ENOTSUP)
+		RTE_LOG(DEBUG, MEMPOOL, "get_capability not supported for %s\n",
+					mp->name);
+	else if (ret < 0)
+		return ret;
+
+	/* update mempool capabilities */
+	mp->flags |= mp_flags;
+
 	if (rte_xen_dom0_supported()) {
 		pg_sz = RTE_PGSIZE_2M;
 		pg_shift = rte_bsf32(pg_sz);
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 85eb770dc..d251d4255 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -389,6 +389,12 @@ typedef int (*rte_mempool_dequeue_t)(struct rte_mempool *mp,
  */
 typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
 
+/**
+ * Get the mempool capabilities.
+ */
+typedef int (*rte_mempool_get_capabilities_t)(const struct rte_mempool *mp,
+		unsigned int *flags);
+
 /** Structure defining mempool operations structure */
 struct rte_mempool_ops {
 	char name[RTE_MEMPOOL_OPS_NAMESIZE]; /**< Name of mempool ops struct. */
@@ -397,6 +403,10 @@ struct rte_mempool_ops {
 	rte_mempool_enqueue_t enqueue;   /**< Enqueue an object. */
 	rte_mempool_dequeue_t dequeue;   /**< Dequeue an object. */
 	rte_mempool_get_count get_count; /**< Get qty of available objs. */
+	/**
+	 * Get the mempool capabilities
+	 */
+	rte_mempool_get_capabilities_t get_capabilities;
 } __rte_cache_aligned;
 
 #define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */
@@ -508,6 +518,23 @@ rte_mempool_ops_enqueue_bulk(struct rte_mempool *mp, void * const *obj_table,
 unsigned
 rte_mempool_ops_get_count(const struct rte_mempool *mp);
 
+/**
+ * @internal wrapper for mempool_ops get_capabilities callback.
+ *
+ * @param mp [in]
+ *   Pointer to the memory pool.
+ * @param flags [out]
+ *   Pointer to the mempool flags.
+ * @return
+ *   - 0: Success; The mempool driver has advertised his pool capabilities in
+ *   flags param.
+ *   - -ENOTSUP - doesn't support get_capabilities ops (valid case).
+ *   - Otherwise, pool create fails.
+ */
+int
+rte_mempool_ops_get_capabilities(const struct rte_mempool *mp,
+					unsigned int *flags);
+
 /**
  * @internal wrapper for mempool_ops free callback.
  *
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
index 5f24de250..f2af5e5bb 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -37,6 +37,7 @@
 
 #include <rte_mempool.h>
 #include <rte_errno.h>
+#include <rte_dev.h>
 
 /* indirect jump table to support external memory pools. */
 struct rte_mempool_ops_table rte_mempool_ops_table = {
@@ -85,6 +86,7 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h)
 	ops->enqueue = h->enqueue;
 	ops->dequeue = h->dequeue;
 	ops->get_count = h->get_count;
+	ops->get_capabilities = h->get_capabilities;
 
 	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
 
@@ -123,6 +125,19 @@ rte_mempool_ops_get_count(const struct rte_mempool *mp)
 	return ops->get_count(mp);
 }
 
+/* wrapper to get external mempool capabilities. */
+int
+rte_mempool_ops_get_capabilities(const struct rte_mempool *mp,
+					unsigned int *flags)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_get_ops(mp->ops_index);
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->get_capabilities, -ENOTSUP);
+	return ops->get_capabilities(mp, flags);
+}
+
 /* sets mempool ops previously registered by rte_mempool_register_ops. */
 int
 rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index f9c079447..3c3471507 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -41,3 +41,10 @@ DPDK_16.07 {
 	rte_mempool_set_ops_byname;
 
 } DPDK_2.0;
+
+DPDK_17.11 {
+	global:
+
+	rte_mempool_ops_get_capabilities;
+
+} DPDK_16.07;
-- 
2.14.1


* [PATCH v6 6/8] mempool: detect physical contiguous object in pool
  2017-09-07 15:30         ` [PATCH v6 0/8] Infrastructure to support octeontx HW mempool manager Santosh Shukla
                             ` (4 preceding siblings ...)
  2017-09-07 15:30           ` [PATCH v6 5/8] mempool: get the mempool capability Santosh Shukla
@ 2017-09-07 15:30           ` Santosh Shukla
  2017-09-07 15:30           ` [PATCH v6 7/8] mempool: introduce block size align flag Santosh Shukla
                             ` (3 subsequent siblings)
  9 siblings, 0 replies; 116+ messages in thread
From: Santosh Shukla @ 2017-09-07 15:30 UTC (permalink / raw)
  To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla

The memory area containing all the objects must be physically
contiguous.
Introduce the MEMPOOL_F_CAPA_PHYS_CONTIG flag for this use case.

The flag is useful to detect whether the pool area has sufficient space
to fit all objects. If it does not, return -ENOSPC.
This way, we make sure that all objects within the pool are contiguous.
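
The check itself reduces to a single length comparison; a minimal sketch
(check_contig_space is an illustrative helper name, not a library function):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/*
 * Sketch of the test added to rte_mempool_populate_phys(): when
 * MEMPOOL_F_CAPA_PHYS_CONTIG is advertised, the single memory chunk
 * of 'len' bytes must hold all 'n_objs' objects, else populate fails.
 */
static int
check_contig_space(size_t len, size_t total_elt_sz, unsigned int n_objs)
{
	if (len < (size_t)total_elt_sz * n_objs)
		return -ENOSPC;	/* pool area not big enough */
	return 0;
}
```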

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
 lib/librte_mempool/rte_mempool.c | 10 ++++++++++
 lib/librte_mempool/rte_mempool.h |  6 ++++++
 2 files changed, 16 insertions(+)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 92de39562..146e38675 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -369,6 +369,16 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
 
 	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
 
+	/* Detect pool area has sufficient space for elements */
+	if (mp->flags & MEMPOOL_F_CAPA_PHYS_CONTIG) {
+		if (len < total_elt_sz * mp->size) {
+			RTE_LOG(ERR, MEMPOOL,
+				"pool area %" PRIx64 " not enough\n",
+				(uint64_t)len);
+			return -ENOSPC;
+		}
+	}
+
 	memhdr = rte_zmalloc("MEMPOOL_MEMHDR", sizeof(*memhdr), 0);
 	if (memhdr == NULL)
 		return -ENOMEM;
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index d251d4255..734392556 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -265,6 +265,12 @@ struct rte_mempool {
 #define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
 #define MEMPOOL_F_POOL_CREATED   0x0010 /**< Internal: pool is created. */
 #define MEMPOOL_F_NO_PHYS_CONTIG 0x0020 /**< Don't need physically contiguous objs. */
+/**
+ * This capability flag is advertised by a mempool handler, if the whole
+ * memory area containing the objects must be physically contiguous.
+ * Note: This flag should not be passed by application.
+ */
+#define MEMPOOL_F_CAPA_PHYS_CONTIG 0x0040
 
 /**
  * @internal When debug is enabled, store some statistics.
-- 
2.14.1


* [PATCH v6 7/8] mempool: introduce block size align flag
  2017-09-07 15:30         ` [PATCH v6 0/8] Infrastructure to support octeontx HW mempool manager Santosh Shukla
                             ` (5 preceding siblings ...)
  2017-09-07 15:30           ` [PATCH v6 6/8] mempool: detect physical contiguous object in pool Santosh Shukla
@ 2017-09-07 15:30           ` Santosh Shukla
  2017-09-22 12:59             ` Hemant Agrawal
  2017-09-25 11:32             ` Olivier MATZ
  2017-09-07 15:30           ` [PATCH v6 8/8] mempool: notify memory area to pool Santosh Shukla
                             ` (2 subsequent siblings)
  9 siblings, 2 replies; 116+ messages in thread
From: Santosh Shukla @ 2017-09-07 15:30 UTC (permalink / raw)
  To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla

Some mempool hardware, like the octeontx/fpa block, demands a block
size (/total_elt_sz) aligned object start address.

Introduce a MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS flag.
If this flag is set:
- Align the object start address (vaddr) to a multiple of total_elt_sz.
- Allocate one additional object. The additional object is needed to
  make sure that the requested 'n' objects get correctly populated.

Example:
- Let's say we get a memory chunk of size 'x' from the memzone.
- And the application has requested 'n' objects from the mempool.
- Ideally, we would use objects from start address 0 up to
  (x - block_sz) for the n objects.
- But the first object address, i.e. 0, is not necessarily aligned to
  block_sz.
- So we derive an offset value 'off' for block_sz alignment purposes.
- That 'off' makes sure that the start address of each object is
  blk_sz aligned.
- Calculating 'off' may end up sacrificing the first block_sz area of
  the memzone area x, so the total number of objects which can fit in
  the pool area is n-1, which is incorrect behavior.

Therefore we request one additional object (/block_sz area) from the
memzone when the MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS flag is set.
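
The offset computation described above can be sketched in isolation
(blk_align_offset is an illustrative name; the patch computes the same
expression inline in rte_mempool_populate_phys()):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Offset from 'vaddr' to the first total_elt_sz-aligned object start,
 * as computed by this patch.  Note it yields total_elt_sz (one whole
 * spare block) rather than 0 when vaddr is already aligned, which the
 * one extra object requested from the memzone pays for.
 */
static size_t
blk_align_offset(uintptr_t vaddr, size_t total_elt_sz)
{
	return total_elt_sz - (vaddr % total_elt_sz);
}
```

For instance, with total_elt_sz = 96 and vaddr = 1000, the offset is 56,
placing the first object at address 1056, a multiple of 96.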

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
v5 --> v6:
- Renamed from MEMPOOL_F_BLK_ALIGNED_OBJECTS to
  MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS. (Suggested by Olivier)
- Updated the capability flag description (Suggested by Olivier)

For history, refer [1]
[1] http://dpdk.org/dev/patchwork/patch/28418/

v4 --> v5:
- Added vaddr in git description of patch (suggested by Olivier)
- Renamed to aligned flag to MEMPOOL_F_BLK_ALIGNED_OBJECTS (suggested by
  Olivier)
Refer [2].
[2] http://dpdk.org/dev/patchwork/patch/27600/

 lib/librte_mempool/rte_mempool.c | 19 ++++++++++++++++---
 lib/librte_mempool/rte_mempool.h | 12 ++++++++++++
 2 files changed, 28 insertions(+), 3 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 146e38675..decdda3a6 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -239,10 +239,15 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
  */
 size_t
 rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
-		      __rte_unused unsigned int flags)
+		      unsigned int flags)
 {
 	size_t obj_per_page, pg_num, pg_sz;
 
+	if (flags & (MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS |
+			MEMPOOL_F_CAPA_PHYS_CONTIG))
+		/* alignment need one additional object */
+		elt_num += 1;
+
 	if (total_elt_sz == 0)
 		return 0;
 
@@ -265,13 +270,18 @@ rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
 ssize_t
 rte_mempool_xmem_usage(__rte_unused void *vaddr, uint32_t elt_num,
 	size_t total_elt_sz, const phys_addr_t paddr[], uint32_t pg_num,
-	uint32_t pg_shift, __rte_unused unsigned int flags)
+	uint32_t pg_shift, unsigned int flags)
 {
 	uint32_t elt_cnt = 0;
 	phys_addr_t start, end;
 	uint32_t paddr_idx;
 	size_t pg_sz = (size_t)1 << pg_shift;
 
+	if (flags & (MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS |
+			MEMPOOL_F_CAPA_PHYS_CONTIG))
+		/* alignment need one additional object */
+		elt_num += 1;
+
 	/* if paddr is NULL, assume contiguous memory */
 	if (paddr == NULL) {
 		start = 0;
@@ -390,7 +400,10 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
 	memhdr->free_cb = free_cb;
 	memhdr->opaque = opaque;
 
-	if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
+	if (mp->flags & MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS)
+		/* align object start address to a multiple of total_elt_sz */
+		off = total_elt_sz - ((uintptr_t)vaddr % total_elt_sz);
+	else if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
 		off = RTE_PTR_ALIGN_CEIL(vaddr, 8) - vaddr;
 	else
 		off = RTE_PTR_ALIGN_CEIL(vaddr, RTE_CACHE_LINE_SIZE) - vaddr;
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 734392556..24195dda0 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -271,6 +271,18 @@ struct rte_mempool {
  * Note: This flag should not be passed by application.
  */
 #define MEMPOOL_F_CAPA_PHYS_CONTIG 0x0040
+/**
+ * This capability flag is advertised by a mempool handler. Used for a case
+ * where mempool driver wants object start address(vaddr) aligned to block
+ * size(/ total element size).
+ *
+ * Note:
+ * - This flag should not be passed by application.
+ *   Flag used for mempool driver only.
+ * - Mempool driver must also set MEMPOOL_F_CAPA_PHYS_CONTIG flag along with
+ *   MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS.
+ */
+#define MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS 0x0080
 
 /**
  * @internal When debug is enabled, store some statistics.
-- 
2.14.1


* [PATCH v6 8/8] mempool: notify memory area to pool
  2017-09-07 15:30         ` [PATCH v6 0/8] Infrastructure to support octeontx HW mempool manager Santosh Shukla
                             ` (6 preceding siblings ...)
  2017-09-07 15:30           ` [PATCH v6 7/8] mempool: introduce block size align flag Santosh Shukla
@ 2017-09-07 15:30           ` Santosh Shukla
  2017-09-25 11:41             ` Olivier MATZ
  2017-09-13  9:58           ` [PATCH v6 0/8] Infrastructure to support octeontx HW mempool manager santosh
  2017-10-01  9:28           ` [PATCH v7 " Santosh Shukla
  9 siblings, 1 reply; 116+ messages in thread
From: Santosh Shukla @ 2017-09-07 15:30 UTC (permalink / raw)
  To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla

A HW pool manager, e.g. the Octeontx SoC, demands that software program
the pool's start and end addresses. Currently, there is no such API in
the external mempool. Introduce the rte_mempool_ops_register_memory_area
API, which lets the HW (pool manager) know when the common layer selects
hugepages:
For each hugepage - notify its start/end address to the HW pool manager.
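
A driver-side handler for this callback could track the overall range as
areas are reported, as in the following hypothetical sketch (phys_addr_t is
re-declared as a stand-in, and example_register_memory_area with its
pool_start/pool_end bookkeeping is illustrative, not any real driver code):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef uint64_t phys_addr_t;	/* stand-in for DPDK's typedef */
struct rte_mempool;		/* opaque for this sketch */

/* Running range of all memory areas reported so far. */
static phys_addr_t pool_start = UINT64_MAX;
static phys_addr_t pool_end;

/*
 * Hypothetical ->register_memory_area() handler: record the physical
 * [paddr, paddr + len) range of each hugepage so the HW pool manager
 * can later be programmed with the pool's overall start/end addresses.
 */
static int
example_register_memory_area(const struct rte_mempool *mp, char *vaddr,
			     phys_addr_t paddr, size_t len)
{
	(void)mp;
	(void)vaddr;
	if (paddr < pool_start)
		pool_start = paddr;
	if (paddr + len > pool_end)
		pool_end = paddr + len;
	return 0;
}
```

With several memory areas, the callback fires once per area, so the handler
accumulates the widest range rather than assuming a single notification.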

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
v5 --> v6:
- Renamed from rte_mempool_ops_update_range to
  rte_mempool_ops_register_memory_area (Suggested by Olivier)
- The renamed API now returns int (Suggested by Olivier)
- Updated the API description and error details explicitly
  (Suggested by Olivier)
Refer [1]
[1] http://dpdk.org/dev/patchwork/patch/28419/

 lib/librte_mempool/rte_mempool.c           |  5 +++++
 lib/librte_mempool/rte_mempool.h           | 34 ++++++++++++++++++++++++++++++
 lib/librte_mempool/rte_mempool_ops.c       | 14 ++++++++++++
 lib/librte_mempool/rte_mempool_version.map |  1 +
 4 files changed, 54 insertions(+)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index decdda3a6..842382f58 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -365,6 +365,11 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
 	struct rte_mempool_memhdr *memhdr;
 	int ret;
 
+	/* update range info to mempool */
+	ret = rte_mempool_ops_register_memory_area(mp, vaddr, paddr, len);
+	if (ret != -ENOTSUP && ret < 0)
+		return ret;
+
 	/* create the internal ring if not already done */
 	if ((mp->flags & MEMPOOL_F_POOL_CREATED) == 0) {
 		ret = rte_mempool_ops_alloc(mp);
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 24195dda0..8d0171b54 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -413,6 +413,12 @@ typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
 typedef int (*rte_mempool_get_capabilities_t)(const struct rte_mempool *mp,
 		unsigned int *flags);
 
+/**
+ * Notify new memory area to mempool.
+ */
+typedef int (*rte_mempool_ops_register_memory_area_t)
+(const struct rte_mempool *mp, char *vaddr, phys_addr_t paddr, size_t len);
+
 /** Structure defining mempool operations structure */
 struct rte_mempool_ops {
 	char name[RTE_MEMPOOL_OPS_NAMESIZE]; /**< Name of mempool ops struct. */
@@ -425,6 +431,10 @@ struct rte_mempool_ops {
 	 * Get the mempool capabilities
 	 */
 	rte_mempool_get_capabilities_t get_capabilities;
+	/**
+	 * Notify new memory area to mempool
+	 */
+	rte_mempool_ops_register_memory_area_t register_memory_area;
 } __rte_cache_aligned;
 
 #define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */
@@ -552,6 +562,30 @@ rte_mempool_ops_get_count(const struct rte_mempool *mp);
 int
 rte_mempool_ops_get_capabilities(const struct rte_mempool *mp,
 					unsigned int *flags);
+/**
+ * @internal wrapper for mempool_ops register_memory_area callback.
+ * API to notify the mempool handler if a new memory area is added to pool.
+ *
+ * Mempool handler usually get notified once for the case of mempool get full
+ * range of memory area. However, if several memory areas exist then mempool
+ * handler gets notified each time.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param vaddr
+ *   Pointer to the buffer virtual address
+ * @param paddr
+ *   Pointer to the buffer physical address
+ * @param len
+ *   Pool size
+ * @return
+ *  - 0: Success;
+ *  - ENOTSUP: doesn't support register_memory_area ops (valid error case).
+ *  - Otherwise, rte_mempool_populate_phys fails thus pool create fails.
+ */
+int
+rte_mempool_ops_register_memory_area(const struct rte_mempool *mp,
+				char *vaddr, phys_addr_t paddr, size_t len);
 
 /**
  * @internal wrapper for mempool_ops free callback.
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
index f2af5e5bb..a6b5f2002 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -87,6 +87,7 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h)
 	ops->dequeue = h->dequeue;
 	ops->get_count = h->get_count;
 	ops->get_capabilities = h->get_capabilities;
+	ops->register_memory_area = h->register_memory_area;
 
 	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
 
@@ -138,6 +139,19 @@ rte_mempool_ops_get_capabilities(const struct rte_mempool *mp,
 	return ops->get_capabilities(mp, flags);
 }
 
+/* wrapper to notify new memory area to external mempool */
+int
+rte_mempool_ops_register_memory_area(const struct rte_mempool *mp, char *vaddr,
+					phys_addr_t paddr, size_t len)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_get_ops(mp->ops_index);
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->register_memory_area, -ENOTSUP);
+	return ops->register_memory_area(mp, vaddr, paddr, len);
+}
+
 /* sets mempool ops previously registered by rte_mempool_register_ops. */
 int
 rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index 3c3471507..2663001c3 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -46,5 +46,6 @@ DPDK_17.11 {
 	global:
 
 	rte_mempool_ops_get_capabilities;
+	rte_mempool_ops_register_memory_area;
 
 } DPDK_16.07;
-- 
2.14.1

^ permalink raw reply related	[flat|nested] 116+ messages in thread

* Re: [PATCH v6 0/8] Infrastructure to support octeontx HW mempool manager
  2017-09-07 15:30         ` [PATCH v6 0/8] Infrastructure to support octeontx HW mempool manager Santosh Shukla
                             ` (7 preceding siblings ...)
  2017-09-07 15:30           ` [PATCH v6 8/8] mempool: notify memory area to pool Santosh Shukla
@ 2017-09-13  9:58           ` santosh
  2017-09-19  8:26             ` santosh
  2017-10-01  9:28           ` [PATCH v7 " Santosh Shukla
  9 siblings, 1 reply; 116+ messages in thread
From: santosh @ 2017-09-13  9:58 UTC (permalink / raw)
  To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal

Hi Olivier,


On Thursday 07 September 2017 09:00 PM, Santosh Shukla wrote:
> v6: 
> Include v5 review change, suggested by Olivier.
> Patches rebased on tip, commit:06791a4bcedf

Are you OK with the changeset in the v6 series?
It's blocking the external mempool driver [1].

Thanks.

[1] http://dpdk.org/ml/archives/dev/2017-August/073898.html

^ permalink raw reply	[flat|nested] 116+ messages in thread

* Re: [PATCH v6 0/8] Infrastructure to support octeontx HW mempool manager
  2017-09-13  9:58           ` [PATCH v6 0/8] Infrastructure to support octeontx HW mempool manager santosh
@ 2017-09-19  8:26             ` santosh
  0 siblings, 0 replies; 116+ messages in thread
From: santosh @ 2017-09-19  8:26 UTC (permalink / raw)
  To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal

Hi Olivier,


On Wednesday 13 September 2017 03:28 PM, santosh wrote:
> Hi Olivier,
>
>
> On Thursday 07 September 2017 09:00 PM, Santosh Shukla wrote:
>> v6: 
>> Include v5 review change, suggested by Olivier.
>> Patches rebased on tip, commit:06791a4bcedf
> Are you OK with the changeset in the v6 series?
> It's blocking the external mempool driver [1].
>
> Thanks.
>
> [1] http://dpdk.org/ml/archives/dev/2017-August/073898.html
>
Ping? We want this series merged in -rc1; it is essential for the mempool and octeontx PMD drivers.
Can you please share your feedback? It shouldn't come down to receiving feedback only a couple
of days before -rc1.

Thanks.

^ permalink raw reply	[flat|nested] 116+ messages in thread

* Re: [PATCH v6 7/8] mempool: introduce block size align flag
  2017-09-07 15:30           ` [PATCH v6 7/8] mempool: introduce block size align flag Santosh Shukla
@ 2017-09-22 12:59             ` Hemant Agrawal
  2017-09-25 11:32             ` Olivier MATZ
  1 sibling, 0 replies; 116+ messages in thread
From: Hemant Agrawal @ 2017-09-22 12:59 UTC (permalink / raw)
  To: Santosh Shukla, olivier.matz, dev; +Cc: thomas, jerin.jacob

Tested-by:  Hemant Agrawal <hemant.agrawal@nxp.com>

On 9/7/2017 9:00 PM, Santosh Shukla wrote:
> Some mempool hw like octeontx/fpa block, demands block size
> (/total_elem_sz) aligned object start address.
>
> Introducing an MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS flag.
> If this flag is set:
> - Align object start address(vaddr) to a multiple of total_elt_sz.
> - Allocate one additional object. Additional object is needed to make
>   sure that requested 'n' object gets correctly populated.
>
> Example:
> - Let's say that we get 'x' size of memory chunk from memzone.
> - And application has requested 'n' object from mempool.
> - Ideally, we start using objects at start address 0 to...(x-block_sz)
>   for n obj.
> - Not necessarily first object address i.e. 0 is aligned to block_sz.
> - So we derive 'offset' value for block_sz alignment purpose i.e..'off'.
> - That 'off' makes sure that start address of object is blk_sz aligned.
> - Calculating 'off' may end up sacrificing first block_sz area of
>   memzone area x. So total number of the object which can fit in the
>   pool area is n-1, Which is incorrect behavior.
>
> Therefore we request one additional object (/block_sz area) from memzone
> when MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS flag is set.
>
> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> ---
> v5 --> v6:
> - Renamed from MEMPOOL_F_BLK_ALIGNED_OBJECTS to
>   MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS. (Suggested by Olivier)
> - Updated capability flag description (Suggested by Olivier)
>
> History refer [1]
> [1] http://dpdk.org/dev/patchwork/patch/28418/
>
> v4 --> v5:
> - Added vaddr in git description of patch (suggested by Olivier)
> - Renamed to aligned flag to MEMPOOL_F_BLK_ALIGNED_OBJECTS (suggested by
>   Olivier)
> Refer [2].
> [2] http://dpdk.org/dev/patchwork/patch/27600/
>
>  lib/librte_mempool/rte_mempool.c | 19 ++++++++++++++++---
>  lib/librte_mempool/rte_mempool.h | 12 ++++++++++++
>  2 files changed, 28 insertions(+), 3 deletions(-)
>
> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> index 146e38675..decdda3a6 100644
> --- a/lib/librte_mempool/rte_mempool.c
> +++ b/lib/librte_mempool/rte_mempool.c
> @@ -239,10 +239,15 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
>   */
>  size_t
>  rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
> -		      __rte_unused unsigned int flags)
> +		      unsigned int flags)
>  {
>  	size_t obj_per_page, pg_num, pg_sz;
>
> +	if (flags & (MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS |
> +			MEMPOOL_F_CAPA_PHYS_CONTIG))
> +		/* alignment need one additional object */
> +		elt_num += 1;
> +
>  	if (total_elt_sz == 0)
>  		return 0;
>
> @@ -265,13 +270,18 @@ rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
>  ssize_t
>  rte_mempool_xmem_usage(__rte_unused void *vaddr, uint32_t elt_num,
>  	size_t total_elt_sz, const phys_addr_t paddr[], uint32_t pg_num,
> -	uint32_t pg_shift, __rte_unused unsigned int flags)
> +	uint32_t pg_shift, unsigned int flags)
>  {
>  	uint32_t elt_cnt = 0;
>  	phys_addr_t start, end;
>  	uint32_t paddr_idx;
>  	size_t pg_sz = (size_t)1 << pg_shift;
>
> +	if (flags & (MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS |
> +			MEMPOOL_F_CAPA_PHYS_CONTIG))
> +		/* alignment need one additional object */
> +		elt_num += 1;
> +
>  	/* if paddr is NULL, assume contiguous memory */
>  	if (paddr == NULL) {
>  		start = 0;
> @@ -390,7 +400,10 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
>  	memhdr->free_cb = free_cb;
>  	memhdr->opaque = opaque;
>
> -	if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
> +	if (mp->flags & MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS)
> +		/* align object start address to a multiple of total_elt_sz */
> +		off = total_elt_sz - ((uintptr_t)vaddr % total_elt_sz);
> +	else if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
>  		off = RTE_PTR_ALIGN_CEIL(vaddr, 8) - vaddr;
>  	else
>  		off = RTE_PTR_ALIGN_CEIL(vaddr, RTE_CACHE_LINE_SIZE) - vaddr;
> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> index 734392556..24195dda0 100644
> --- a/lib/librte_mempool/rte_mempool.h
> +++ b/lib/librte_mempool/rte_mempool.h
> @@ -271,6 +271,18 @@ struct rte_mempool {
>   * Note: This flag should not be passed by application.
>   */
>  #define MEMPOOL_F_CAPA_PHYS_CONTIG 0x0040
> +/**
> + * This capability flag is advertised by a mempool handler. Used for a case
> + * where mempool driver wants object start address(vaddr) aligned to block
> + * size(/ total element size).
> + *
> + * Note:
> + * - This flag should not be passed by application.
> + *   Flag used for mempool driver only.
> + * - Mempool driver must also set MEMPOOL_F_CAPA_PHYS_CONTIG flag along with
> + *   MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS.
> + */
> +#define MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS 0x0080
>
>  /**
>   * @internal When debug is enabled, store some statistics.
>

^ permalink raw reply	[flat|nested] 116+ messages in thread

* Re: [PATCH v6 3/8] mempool: add flags arg in xmem size and usage
  2017-09-07 15:30           ` [PATCH v6 3/8] mempool: add flags arg in xmem size and usage Santosh Shukla
@ 2017-09-25 11:24             ` Olivier MATZ
  0 siblings, 0 replies; 116+ messages in thread
From: Olivier MATZ @ 2017-09-25 11:24 UTC (permalink / raw)
  To: Santosh Shukla; +Cc: dev, thomas, jerin.jacob, hemant.agrawal

On Thu, Sep 07, 2017 at 09:00:37PM +0530, Santosh Shukla wrote:
> xmem_size and xmem_usage need to know the status of mempool flags,
> so add 'flags' arg in _xmem_size/usage() api.
> 
> Following patch will make use of that.
> 
> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>

Acked-by: Olivier Matz <olivier.matz@6wind.com>

^ permalink raw reply	[flat|nested] 116+ messages in thread

* Re: [PATCH v6 5/8] mempool: get the mempool capability
  2017-09-07 15:30           ` [PATCH v6 5/8] mempool: get the mempool capability Santosh Shukla
@ 2017-09-25 11:26             ` Olivier MATZ
  0 siblings, 0 replies; 116+ messages in thread
From: Olivier MATZ @ 2017-09-25 11:26 UTC (permalink / raw)
  To: Santosh Shukla; +Cc: dev, thomas, jerin.jacob, hemant.agrawal

On Thu, Sep 07, 2017 at 09:00:39PM +0530, Santosh Shukla wrote:
> Allow the mempool driver to advertise its pool capabilities.
> For that purpose, an API (rte_mempool_ops_get_capabilities)
> and a ->get_capabilities() handler have been introduced.
> - Upon a ->get_capabilities() call, the mempool driver advertises
> its capabilities in the mempool flags param.
> 
> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>

Acked-by: Olivier Matz <olivier.matz@6wind.com>

^ permalink raw reply	[flat|nested] 116+ messages in thread

* Re: [PATCH v6 7/8] mempool: introduce block size align flag
  2017-09-07 15:30           ` [PATCH v6 7/8] mempool: introduce block size align flag Santosh Shukla
  2017-09-22 12:59             ` Hemant Agrawal
@ 2017-09-25 11:32             ` Olivier MATZ
  2017-09-25 22:08               ` santosh
  1 sibling, 1 reply; 116+ messages in thread
From: Olivier MATZ @ 2017-09-25 11:32 UTC (permalink / raw)
  To: Santosh Shukla; +Cc: dev, thomas, jerin.jacob, hemant.agrawal

On Thu, Sep 07, 2017 at 09:00:41PM +0530, Santosh Shukla wrote:
> Some mempool hw like octeontx/fpa block, demands block size
> (/total_elem_sz) aligned object start address.
> 
> Introducing an MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS flag.
> If this flag is set:
> - Align object start address(vaddr) to a multiple of total_elt_sz.
> - Allocate one additional object. Additional object is needed to make
>   sure that requested 'n' object gets correctly populated.
> 
> Example:
> - Let's say that we get 'x' size of memory chunk from memzone.
> - And application has requested 'n' object from mempool.
> - Ideally, we start using objects at start address 0 to...(x-block_sz)
>   for n obj.
> - Not necessarily first object address i.e. 0 is aligned to block_sz.
> - So we derive 'offset' value for block_sz alignment purpose i.e..'off'.
> - That 'off' makes sure that start address of object is blk_sz aligned.
> - Calculating 'off' may end up sacrificing first block_sz area of
>   memzone area x. So total number of the object which can fit in the
>   pool area is n-1, Which is incorrect behavior.
> 
> Therefore we request one additional object (/block_sz area) from memzone
> when MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS flag is set.
> 
> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
>
> [...]
>
> --- a/lib/librte_mempool/rte_mempool.c
> +++ b/lib/librte_mempool/rte_mempool.c
> @@ -239,10 +239,15 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
>   */
>  size_t
>  rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
> -		      __rte_unused unsigned int flags)
> +		      unsigned int flags)
>  {
>  	size_t obj_per_page, pg_num, pg_sz;
>  
> +	if (flags & (MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS |
> +			MEMPOOL_F_CAPA_PHYS_CONTIG))
> +		/* alignment need one additional object */
> +		elt_num += 1;
> +

In previous version, we agreed to test both _BLK_ALIGNED_OBJECTS
and _PHYS_CONTIG in _xmem_size()/_usage(). Here, the test will
also be true if only MEMPOOL_F_CAPA_PHYS_CONTIG is set.

If we want to test both, the test should be:

    mask = MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS | MEMPOOL_F_CAPA_PHYS_CONTIG;
    if ((flags & mask) == mask)

> @@ -265,13 +270,18 @@ rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
>  ssize_t
>  rte_mempool_xmem_usage(__rte_unused void *vaddr, uint32_t elt_num,
>  	size_t total_elt_sz, const phys_addr_t paddr[], uint32_t pg_num,
> -	uint32_t pg_shift, __rte_unused unsigned int flags)
> +	uint32_t pg_shift, unsigned int flags)
>  {
>  	uint32_t elt_cnt = 0;
>  	phys_addr_t start, end;
>  	uint32_t paddr_idx;
>  	size_t pg_sz = (size_t)1 << pg_shift;
>  
> +	if (flags & (MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS |
> +			MEMPOOL_F_CAPA_PHYS_CONTIG))
> +		/* alignment need one additional object */
> +		elt_num += 1;
> +

Same here

^ permalink raw reply	[flat|nested] 116+ messages in thread

* Re: [PATCH v6 8/8] mempool: notify memory area to pool
  2017-09-07 15:30           ` [PATCH v6 8/8] mempool: notify memory area to pool Santosh Shukla
@ 2017-09-25 11:41             ` Olivier MATZ
  2017-09-25 22:18               ` santosh
  0 siblings, 1 reply; 116+ messages in thread
From: Olivier MATZ @ 2017-09-25 11:41 UTC (permalink / raw)
  To: Santosh Shukla; +Cc: dev, thomas, jerin.jacob, hemant.agrawal

On Thu, Sep 07, 2017 at 09:00:42PM +0530, Santosh Shukla wrote:
> HW pool manager e.g. Octeontx SoC demands s/w to program start and end
> address of pool. Currently, there is no such api in external mempool.
> Introducing rte_mempool_ops_register_memory_area api which will let HW(pool
> manager) to know when common layer selects hugepage:
> For each hugepage - Notify its start/end address to HW pool manager.
> 
> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
>
> [...]
>
> +/**
> + * @internal wrapper for mempool_ops register_memory_area callback.
> + * API to notify the mempool handler if a new memory area is added to pool.
> + *

if -> when

> + * Mempool handler usually get notified once for the case of mempool get full
> + * range of memory area. However, if several memory areas exist then mempool
> + * handler gets notified each time.

Not sure I understand this last paragraph.

> + *
> + * @param mp
> + *   Pointer to the memory pool.
> + * @param vaddr
> + *   Pointer to the buffer virtual address
> + * @param paddr
> + *   Pointer to the buffer physical address
> + * @param len
> + *   Pool size

Minor: missing dot at the end

> + * @return
> + *  - 0: Success;
> + *  - ENOTSUP: doesn't support register_memory_area ops (valid error case).

Missing minus before ENOTSUP.
The dot should be a semicolon instead.

^ permalink raw reply	[flat|nested] 116+ messages in thread

* Re: [PATCH v6 7/8] mempool: introduce block size align flag
  2017-09-25 11:32             ` Olivier MATZ
@ 2017-09-25 22:08               ` santosh
  0 siblings, 0 replies; 116+ messages in thread
From: santosh @ 2017-09-25 22:08 UTC (permalink / raw)
  To: Olivier MATZ; +Cc: dev, thomas, jerin.jacob, hemant.agrawal


On Monday 25 September 2017 12:32 PM, Olivier MATZ wrote:
> On Thu, Sep 07, 2017 at 09:00:41PM +0530, Santosh Shukla wrote:
>> Some mempool hw like octeontx/fpa block, demands block size
>> (/total_elem_sz) aligned object start address.
>>
>> Introducing an MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS flag.
>> If this flag is set:
>> - Align object start address(vaddr) to a multiple of total_elt_sz.
>> - Allocate one additional object. Additional object is needed to make
>>   sure that requested 'n' object gets correctly populated.
>>
>> Example:
>> - Let's say that we get 'x' size of memory chunk from memzone.
>> - And application has requested 'n' object from mempool.
>> - Ideally, we start using objects at start address 0 to...(x-block_sz)
>>   for n obj.
>> - Not necessarily first object address i.e. 0 is aligned to block_sz.
>> - So we derive 'offset' value for block_sz alignment purpose i.e..'off'.
>> - That 'off' makes sure that start address of object is blk_sz aligned.
>> - Calculating 'off' may end up sacrificing first block_sz area of
>>   memzone area x. So total number of the object which can fit in the
>>   pool area is n-1, Which is incorrect behavior.
>>
>> Therefore we request one additional object (/block_sz area) from memzone
>> when MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS flag is set.
>>
>> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
>> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
>>
>> [...]
>>
>> --- a/lib/librte_mempool/rte_mempool.c
>> +++ b/lib/librte_mempool/rte_mempool.c
>> @@ -239,10 +239,15 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
>>   */
>>  size_t
>>  rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
>> -		      __rte_unused unsigned int flags)
>> +		      unsigned int flags)
>>  {
>>  	size_t obj_per_page, pg_num, pg_sz;
>>  
>> +	if (flags & (MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS |
>> +			MEMPOOL_F_CAPA_PHYS_CONTIG))
>> +		/* alignment need one additional object */
>> +		elt_num += 1;
>> +
> In previous version, we agreed to test both _BLK_ALIGNED_OBJECTS
> and _PHYS_CONTIG in _xmem_size()/_usage(). Here, the test will
> also be true if only MEMPOOL_F_CAPA_PHYS_CONTIG is set.
>
> If we want to test both, the test should be:
>
>     mask = MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS | MEMPOOL_F_CAPA_PHYS_CONTIG;
>     if ((flags & mask) == mask)

Queued for v7; agreed on the strict check. Thanks.

>> @@ -265,13 +270,18 @@ rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
>>  ssize_t
>>  rte_mempool_xmem_usage(__rte_unused void *vaddr, uint32_t elt_num,
>>  	size_t total_elt_sz, const phys_addr_t paddr[], uint32_t pg_num,
>> -	uint32_t pg_shift, __rte_unused unsigned int flags)
>> +	uint32_t pg_shift, unsigned int flags)
>>  {
>>  	uint32_t elt_cnt = 0;
>>  	phys_addr_t start, end;
>>  	uint32_t paddr_idx;
>>  	size_t pg_sz = (size_t)1 << pg_shift;
>>  
>> +	if (flags & (MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS |
>> +			MEMPOOL_F_CAPA_PHYS_CONTIG))
>> +		/* alignment need one additional object */
>> +		elt_num += 1;
>> +
> Same here
>

^ permalink raw reply	[flat|nested] 116+ messages in thread

* Re: [PATCH v6 8/8] mempool: notify memory area to pool
  2017-09-25 11:41             ` Olivier MATZ
@ 2017-09-25 22:18               ` santosh
  2017-09-29  4:53                 ` santosh
  0 siblings, 1 reply; 116+ messages in thread
From: santosh @ 2017-09-25 22:18 UTC (permalink / raw)
  To: Olivier MATZ; +Cc: dev, thomas, jerin.jacob, hemant.agrawal


On Monday 25 September 2017 12:41 PM, Olivier MATZ wrote:
> On Thu, Sep 07, 2017 at 09:00:42PM +0530, Santosh Shukla wrote:
>> HW pool manager e.g. Octeontx SoC demands s/w to program start and end
>> address of pool. Currently, there is no such api in external mempool.
>> Introducing rte_mempool_ops_register_memory_area api which will let HW(pool
>> manager) to know when common layer selects hugepage:
>> For each hugepage - Notify its start/end address to HW pool manager.
>>
>> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
>> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
>>
>> [...]
>>
>> +/**
>> + * @internal wrapper for mempool_ops register_memory_area callback.
>> + * API to notify the mempool handler if a new memory area is added to pool.
>> + *
> if -> when

ok.

>> + * Mempool handler usually get notified once for the case of mempool get full
>> + * range of memory area. However, if several memory areas exist then mempool
>> + * handler gets notified each time.
> Not sure I understand this last paragraph.

Refer v5 history [1] for same.

[1] http://dpdk.org/dev/patchwork/patch/28419/

There will be a case where the mempool handler has more than one memory area, for example the no-hugepage case.
In that case the _register_memory_area() op will be called more than once.

In v5, you suggested mentioning this case explicitly in the API description.

If the write-up is not clear to you, could you propose one? Also, are you fine
with the [8/8] patch besides the above note? I plan to send v7 by tomorrow; I would appreciate an answer to the question.

>> + *
>> + * @param mp
>> + *   Pointer to the memory pool.
>> + * @param vaddr
>> + *   Pointer to the buffer virtual address
>> + * @param paddr
>> + *   Pointer to the buffer physical address
>> + * @param len
>> + *   Pool size
> Minor: missing dot at the end

ok.

>> + * @return
>> + *  - 0: Success;
>> + *  - ENOTSUP: doesn't support register_memory_area ops (valid error case).
> Missing minus before ENOTSUP.
> The dot should be a semicolon instead.
>
ok.

Thanks.

^ permalink raw reply	[flat|nested] 116+ messages in thread

* Re: [PATCH v6 8/8] mempool: notify memory area to pool
  2017-09-25 22:18               ` santosh
@ 2017-09-29  4:53                 ` santosh
  2017-09-29  8:20                   ` Olivier MATZ
  0 siblings, 1 reply; 116+ messages in thread
From: santosh @ 2017-09-29  4:53 UTC (permalink / raw)
  To: Olivier MATZ; +Cc: dev, thomas, jerin.jacob, hemant.agrawal

Hi Olivier,


On Monday 25 September 2017 11:18 PM, santosh wrote:
> On Monday 25 September 2017 12:41 PM, Olivier MATZ wrote:
>> On Thu, Sep 07, 2017 at 09:00:42PM +0530, Santosh Shukla wrote:
>>> + * Mempool handler usually get notified once for the case of mempool get full
>>> + * range of memory area. However, if several memory areas exist then mempool
>>> + * handler gets notified each time.
>> Not sure I understand this last paragraph.
> Refer v5 history [1] for same.
>
> [1] http://dpdk.org/dev/patchwork/patch/28419/
>
> There will be a case where the mempool handler has more than one memory area, for example the no-hugepage case.
> In that case the _register_memory_area() op will be called more than once.
>
> In v5, you suggested mentioning this case explicitly in the API description.
>
> If the write-up is not clear to you, could you propose one? Also, are you fine
> with the [8/8] patch besides the above note? I plan to send v7 by tomorrow; I would appreciate an answer to the question.

Ping?

IMO, remove the above description and keep it like:
"API to notify the mempool handler if a new memory area is added to pool." Is that OK with you? Can you please confirm? I need to send v7, and we want this series in -rc1; it's blocking the octeontx mempool and network drivers, and the delayed review is blocking progress.

>>> + *
>>> + * @param mp
>>> + *   Pointer to the memory pool.
>>> + * @param vaddr
>>> + *   Pointer to the buffer virtual address
>>> + * @param paddr
>>> + *   Pointer to the buffer physical address
>>> + * @param len
>>> + *   Pool size
>> Minor: missing dot at the end
> ok.
>
>>> + * @return
>>> + *  - 0: Success;
>>> + *  - ENOTSUP: doesn't support register_memory_area ops (valid error case).
>> Missing minus before ENOTSUP.
>> The dot should be a semicolon instead.
>>
> ok.
>
> Thanks.
>

^ permalink raw reply	[flat|nested] 116+ messages in thread

* Re: [PATCH v6 8/8] mempool: notify memory area to pool
  2017-09-29  4:53                 ` santosh
@ 2017-09-29  8:20                   ` Olivier MATZ
  2017-09-29  8:25                     ` santosh
  0 siblings, 1 reply; 116+ messages in thread
From: Olivier MATZ @ 2017-09-29  8:20 UTC (permalink / raw)
  To: santosh; +Cc: dev, thomas, jerin.jacob, hemant.agrawal

On Fri, Sep 29, 2017 at 05:53:43AM +0100, santosh wrote:
> Hi Olivier,
> 
> 
> On Monday 25 September 2017 11:18 PM, santosh wrote:
> > On Monday 25 September 2017 12:41 PM, Olivier MATZ wrote:
> >> On Thu, Sep 07, 2017 at 09:00:42PM +0530, Santosh Shukla wrote:
> >>> + * Mempool handler usually get notified once for the case of mempool get full
> >>> + * range of memory area. However, if several memory areas exist then mempool
> >>> + * handler gets notified each time.
> >> Not sure I understand this last paragraph.
> > Refer v5 history [1] for same.
> >
> > [1] http://dpdk.org/dev/patchwork/patch/28419/
> >
> > There will be a case where the mempool handler has more than one memory area, for example the no-hugepage case.
> > In that case the _register_memory_area() op will be called more than once.
> >
> > In v5, you suggested mentioning this case explicitly in the API description.
> >
> > If the write-up is not clear to you, could you propose one? Also, are you fine
> > with the [8/8] patch besides the above note? I plan to send v7 by tomorrow; I would appreciate an answer to the question.
> 
> Ping?
> 
> IMO, remove the above description and keep it like:
> "API to notify the mempool handler if a new memory area is added to pool." Is that OK with you? Can you please confirm? I need to send v7, and we want this series in -rc1; it's blocking the octeontx mempool and network drivers, and the delayed review is blocking progress.

The proposed description is ok.
I have no other comment for the rest of the patch.

^ permalink raw reply	[flat|nested] 116+ messages in thread

* Re: [PATCH v6 8/8] mempool: notify memory area to pool
  2017-09-29  8:20                   ` Olivier MATZ
@ 2017-09-29  8:25                     ` santosh
  0 siblings, 0 replies; 116+ messages in thread
From: santosh @ 2017-09-29  8:25 UTC (permalink / raw)
  To: Olivier MATZ; +Cc: dev, thomas, jerin.jacob, hemant.agrawal

Hi Olivier,


On Friday 29 September 2017 09:20 AM, Olivier MATZ wrote:
> On Fri, Sep 29, 2017 at 05:53:43AM +0100, santosh wrote:
>> Hi Olivier,
>>
>>
>> On Monday 25 September 2017 11:18 PM, santosh wrote:
>>> On Monday 25 September 2017 12:41 PM, Olivier MATZ wrote:
>>>> On Thu, Sep 07, 2017 at 09:00:42PM +0530, Santosh Shukla wrote:
>>>>> + * Mempool handler usually get notified once for the case of mempool get full
>>>>> + * range of memory area. However, if several memory areas exist then mempool
>>>>> + * handler gets notified each time.
>>>> Not sure I understand this last paragraph.
>>> Refer v5 history [1] for same.
>>>
>>> [1] http://dpdk.org/dev/patchwork/patch/28419/
>>>
>>> There will be a case where the mempool handler has more than one memory area, for example the no-hugepage case.
>>> In that case the _register_memory_area() op will be called more than once.
>>>
>>> In v5, you suggested mentioning this case explicitly in the API description.
>>>
>>> If the write-up is not clear to you, could you propose one? Also, are you fine
>>> with the [8/8] patch besides the above note? I plan to send v7 by tomorrow; I would appreciate an answer to the question.
>> Ping?
>>
>> IMO, remove the above description and keep it like:
>> "API to notify the mempool handler if a new memory area is added to pool." Is that OK with you? Can you please confirm? I need to send v7, and we want this series in -rc1; it's blocking the octeontx mempool and network drivers, and the delayed review is blocking progress.
> The proposed description is ok.
> I have no other comment for the rest of the patch.
>
OK, I will send v7 with the above API description.
Thanks.

^ permalink raw reply	[flat|nested] 116+ messages in thread

* [PATCH v7 0/8] Infrastructure to support octeontx HW mempool manager
  2017-09-07 15:30         ` [PATCH v6 0/8] Infrastructure to support octeontx HW mempool manager Santosh Shukla
                             ` (8 preceding siblings ...)
  2017-09-13  9:58           ` [PATCH v6 0/8] Infrastructure to support octeontx HW mempool manager santosh
@ 2017-10-01  9:28           ` Santosh Shukla
  2017-10-01  9:28             ` [PATCH v7 1/8] mempool: remove unused flags argument Santosh Shukla
                               ` (8 more replies)
  9 siblings, 9 replies; 116+ messages in thread
From: Santosh Shukla @ 2017-10-01  9:28 UTC (permalink / raw)
  To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla

v7:
Includes v6 minor review changes suggested by Olivier.
Patches are rebased on tip/upstream commit 5dce9fcdb23.

v6: 
Includes v5 review changes suggested by Olivier.
Patches rebased on tip, commit 06791a4bcedf.

v5:
Includes v4 review change, suggested by Olivier.

v4:
Include
- mempool deprecation changes, refer [1],
- patches are rebased against v17.11-rc0.

In order to support the octeontx HW mempool manager, the common mempool layer must
meet the conditions below:
- The object start address should be block size (total element size) aligned.
- Objects must have physically contiguous addresses within the pool.

Right now the mempool library supports neither.

This patchset adds infrastructure to support both conditions in a _generic_ way.
The proposed solution won't affect existing mempool drivers or their functionality.

Summary:
Capability flags are introduced. Mempool drivers can now advertise their
capabilities to the common mempool layer (at pool creation time).
Handlers are introduced in order to support the capability flags.

Flags:
* MEMPOOL_F_CAPA_PHYS_CONTIG - If this flag is set, detect whether the objects
have physically contiguous addresses within a hugepage.

* MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS - If this flag is set, make sure that object
addresses are block size aligned.

API:
Two handlers are introduced:
* rte_mempool_ops_get_capabilities - advertise mempool manager capabilities.
* rte_mempool_ops_register_memory_area - notify the memory area (start/end addr) to the
					 HW mempool manager.

Change History:
v6 --> v7:
- Added mask (flag check var) in [07/08] (Suggested by Olivier)
- Incorporated comment nits changes in [08/08] (Suggested by Olivier).

v5 --> v6:
- Renamed flag from MEMPOOL_F_BLK_ALIGNED_OBJECTS to
  MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS and updated API description (Suggested by
  Olivier)
- Muxed the _ALIGNED capability flag with _PHY_CONTIG per the v5 thread [5].
- Renamed API from rte_mempool_ops_update_range to
  rte_mempool_ops_register_memory_area (Suggested by Olivier)
- Updated the descriptions of both the flags and the APIs (Suggested by Olivier).

Refer individual patch for detailed change history.


v4 --> v5:
- Replaced mp param with flags param in xmem_size/_usage() api. (Suggested by
  Olivier)
- Renamed flags from MEMPOOL_F_POOL_BLK_SZ_ALIGNED to
  MEMPOOL_F_BLK_ALIGNED_OBJECTS (suggested by Olivier)
- Added a flags param in the get_capabilities() handler (suggested by Olivier)


v3 --> v4:
* [01 - 02 - 03/07] mempool deprecation notice changes.
* [04 - 05 - 06 - 07/07] are v3 patches.

v2 --> v3:
(Note: v3 work is based on deprecation notice [1], It's for 17.11)
* Changed _version.map from 17.08 to 17.11.
* build fixes reported by stv_sys.
* Patchset rebased on upstream commit: da94a999.


v1 --> v2:
* [01/06] Per deprecation notice [1], changed the rte_mempool 'flag'
  data type from int to unsigned int and removed the flag param
  from the _xmem_size/usage APIs.
* [02/06] Incorporated review feedback from v1 [2] (Suggested by Olivier)
* [03/06] Renamed the flag to MEMPOOL_F_CAPA_PHYS_CONTIG
  and reworded the comment. (Suggested by Olivier per v1 [3])
* [04/06] Added a new mempool arg in xmem_size/usage. (Suggested by Olivier)
* [05/06] Patch description changed:
        - Removed else-if bracket mix
        - Removed sanity check for alignment
        - Removed extra var delta
        - Removed __rte_unused from xmem_usage/size and added _BLK_SZ_ALIGN check.
        (Suggested by Olivier per v1 [4])
* [06/06] Added RTE_FUNC_PTR_OR_RET in rte_mempool_ops_update_ops.

Checkpatch status:
CLEAN.

Thanks.

[1] deprecation notice v2: http://dpdk.org/dev/patchwork/patch/27079/
[2] v1: http://dpdk.org/dev/patchwork/patch/25603/
[3] v1: http://dpdk.org/dev/patchwork/patch/25604/
[4] v1: http://dpdk.org/dev/patchwork/patch/25605/
[5] v5: http://dpdk.org/dev/patchwork/patch/28418/

Santosh Shukla (8):
  mempool: remove unused flags argument
  mempool: change flags from int to unsigned int
  mempool: add flags arg in xmem size and usage
  doc: remove mempool notice
  mempool: get the mempool capability
  mempool: detect physical contiguous object in pool
  mempool: introduce block size align flag
  mempool: notify memory area to pool

 doc/guides/rel_notes/deprecation.rst       |   9 ---
 doc/guides/rel_notes/release_17_11.rst     |   7 ++
 drivers/net/xenvirt/rte_mempool_gntalloc.c |   7 +-
 lib/librte_mempool/rte_mempool.c           |  60 +++++++++++++--
 lib/librte_mempool/rte_mempool.h           | 116 ++++++++++++++++++++++-------
 lib/librte_mempool/rte_mempool_ops.c       |  29 ++++++++
 lib/librte_mempool/rte_mempool_version.map |   8 ++
 test/test/test_mempool.c                   |  25 ++++---
 test/test/test_mempool_perf.c              |   4 +-
 9 files changed, 207 insertions(+), 58 deletions(-)

-- 
2.14.1

^ permalink raw reply	[flat|nested] 116+ messages in thread

* [PATCH v7 1/8] mempool: remove unused flags argument
  2017-10-01  9:28           ` [PATCH v7 " Santosh Shukla
@ 2017-10-01  9:28             ` Santosh Shukla
  2017-10-01  9:28             ` [PATCH v7 2/8] mempool: change flags from int to unsigned int Santosh Shukla
                               ` (7 subsequent siblings)
  8 siblings, 0 replies; 116+ messages in thread
From: Santosh Shukla @ 2017-10-01  9:28 UTC (permalink / raw)
  To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla

* Remove redundant 'flags' API description from
  - __mempool_generic_put
  - __mempool_generic_get
  - rte_mempool_generic_put
  - rte_mempool_generic_get

* Remove unused 'flags' argument from
  - rte_mempool_generic_put
  - rte_mempool_generic_get

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
 lib/librte_mempool/rte_mempool.h | 31 +++++++++----------------------
 test/test/test_mempool.c         | 18 +++++++++---------
 test/test/test_mempool_perf.c    |  4 ++--
 3 files changed, 20 insertions(+), 33 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 76b5b3b15..ec3884473 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -1034,13 +1034,10 @@ rte_mempool_default_cache(struct rte_mempool *mp, unsigned lcore_id)
  *   positive.
  * @param cache
  *   A pointer to a mempool cache structure. May be NULL if not needed.
- * @param flags
- *   The flags used for the mempool creation.
- *   Single-producer (MEMPOOL_F_SP_PUT flag) or multi-producers.
  */
 static __rte_always_inline void
 __mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
-		      unsigned n, struct rte_mempool_cache *cache)
+		      unsigned int n, struct rte_mempool_cache *cache)
 {
 	void **cache_objs;
 
@@ -1096,14 +1093,10 @@ __mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
  *   The number of objects to add in the mempool from the obj_table.
  * @param cache
  *   A pointer to a mempool cache structure. May be NULL if not needed.
- * @param flags
- *   The flags used for the mempool creation.
- *   Single-producer (MEMPOOL_F_SP_PUT flag) or multi-producers.
  */
 static __rte_always_inline void
 rte_mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
-			unsigned n, struct rte_mempool_cache *cache,
-			__rte_unused int flags)
+			unsigned int n, struct rte_mempool_cache *cache)
 {
 	__mempool_check_cookies(mp, obj_table, n, 0);
 	__mempool_generic_put(mp, obj_table, n, cache);
@@ -1125,11 +1118,11 @@ rte_mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
  */
 static __rte_always_inline void
 rte_mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
-		     unsigned n)
+		     unsigned int n)
 {
 	struct rte_mempool_cache *cache;
 	cache = rte_mempool_default_cache(mp, rte_lcore_id());
-	rte_mempool_generic_put(mp, obj_table, n, cache, mp->flags);
+	rte_mempool_generic_put(mp, obj_table, n, cache);
 }
 
 /**
@@ -1160,16 +1153,13 @@ rte_mempool_put(struct rte_mempool *mp, void *obj)
  *   The number of objects to get, must be strictly positive.
  * @param cache
  *   A pointer to a mempool cache structure. May be NULL if not needed.
- * @param flags
- *   The flags used for the mempool creation.
- *   Single-consumer (MEMPOOL_F_SC_GET flag) or multi-consumers.
  * @return
  *   - >=0: Success; number of objects supplied.
  *   - <0: Error; code of ring dequeue function.
  */
 static __rte_always_inline int
 __mempool_generic_get(struct rte_mempool *mp, void **obj_table,
-		      unsigned n, struct rte_mempool_cache *cache)
+		      unsigned int n, struct rte_mempool_cache *cache)
 {
 	int ret;
 	uint32_t index, len;
@@ -1241,16 +1231,13 @@ __mempool_generic_get(struct rte_mempool *mp, void **obj_table,
  *   The number of objects to get from mempool to obj_table.
  * @param cache
  *   A pointer to a mempool cache structure. May be NULL if not needed.
- * @param flags
- *   The flags used for the mempool creation.
- *   Single-consumer (MEMPOOL_F_SC_GET flag) or multi-consumers.
  * @return
  *   - 0: Success; objects taken.
  *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.
  */
 static __rte_always_inline int
-rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table, unsigned n,
-			struct rte_mempool_cache *cache, __rte_unused int flags)
+rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table,
+			unsigned int n, struct rte_mempool_cache *cache)
 {
 	int ret;
 	ret = __mempool_generic_get(mp, obj_table, n, cache);
@@ -1282,11 +1269,11 @@ rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table, unsigned n,
  *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.
  */
 static __rte_always_inline int
-rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
+rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned int n)
 {
 	struct rte_mempool_cache *cache;
 	cache = rte_mempool_default_cache(mp, rte_lcore_id());
-	return rte_mempool_generic_get(mp, obj_table, n, cache, mp->flags);
+	return rte_mempool_generic_get(mp, obj_table, n, cache);
 }
 
 /**
diff --git a/test/test/test_mempool.c b/test/test/test_mempool.c
index 0a4423954..47dc3ac5f 100644
--- a/test/test/test_mempool.c
+++ b/test/test/test_mempool.c
@@ -129,7 +129,7 @@ test_mempool_basic(struct rte_mempool *mp, int use_external_cache)
 	rte_mempool_dump(stdout, mp);
 
 	printf("get an object\n");
-	if (rte_mempool_generic_get(mp, &obj, 1, cache, 0) < 0)
+	if (rte_mempool_generic_get(mp, &obj, 1, cache) < 0)
 		GOTO_ERR(ret, out);
 	rte_mempool_dump(stdout, mp);
 
@@ -152,21 +152,21 @@ test_mempool_basic(struct rte_mempool *mp, int use_external_cache)
 #endif
 
 	printf("put the object back\n");
-	rte_mempool_generic_put(mp, &obj, 1, cache, 0);
+	rte_mempool_generic_put(mp, &obj, 1, cache);
 	rte_mempool_dump(stdout, mp);
 
 	printf("get 2 objects\n");
-	if (rte_mempool_generic_get(mp, &obj, 1, cache, 0) < 0)
+	if (rte_mempool_generic_get(mp, &obj, 1, cache) < 0)
 		GOTO_ERR(ret, out);
-	if (rte_mempool_generic_get(mp, &obj2, 1, cache, 0) < 0) {
-		rte_mempool_generic_put(mp, &obj, 1, cache, 0);
+	if (rte_mempool_generic_get(mp, &obj2, 1, cache) < 0) {
+		rte_mempool_generic_put(mp, &obj, 1, cache);
 		GOTO_ERR(ret, out);
 	}
 	rte_mempool_dump(stdout, mp);
 
 	printf("put the objects back\n");
-	rte_mempool_generic_put(mp, &obj, 1, cache, 0);
-	rte_mempool_generic_put(mp, &obj2, 1, cache, 0);
+	rte_mempool_generic_put(mp, &obj, 1, cache);
+	rte_mempool_generic_put(mp, &obj2, 1, cache);
 	rte_mempool_dump(stdout, mp);
 
 	/*
@@ -178,7 +178,7 @@ test_mempool_basic(struct rte_mempool *mp, int use_external_cache)
 		GOTO_ERR(ret, out);
 
 	for (i = 0; i < MEMPOOL_SIZE; i++) {
-		if (rte_mempool_generic_get(mp, &objtable[i], 1, cache, 0) < 0)
+		if (rte_mempool_generic_get(mp, &objtable[i], 1, cache) < 0)
 			break;
 	}
 
@@ -200,7 +200,7 @@ test_mempool_basic(struct rte_mempool *mp, int use_external_cache)
 				ret = -1;
 		}
 
-		rte_mempool_generic_put(mp, &objtable[i], 1, cache, 0);
+		rte_mempool_generic_put(mp, &objtable[i], 1, cache);
 	}
 
 	free(objtable);
diff --git a/test/test/test_mempool_perf.c b/test/test/test_mempool_perf.c
index 07b28c066..3b8f7de7c 100644
--- a/test/test/test_mempool_perf.c
+++ b/test/test/test_mempool_perf.c
@@ -186,7 +186,7 @@ per_lcore_mempool_test(void *arg)
 				ret = rte_mempool_generic_get(mp,
 							      &obj_table[idx],
 							      n_get_bulk,
-							      cache, 0);
+							      cache);
 				if (unlikely(ret < 0)) {
 					rte_mempool_dump(stdout, mp);
 					/* in this case, objects are lost... */
@@ -200,7 +200,7 @@ per_lcore_mempool_test(void *arg)
 			while (idx < n_keep) {
 				rte_mempool_generic_put(mp, &obj_table[idx],
 							n_put_bulk,
-							cache, 0);
+							cache);
 				idx += n_put_bulk;
 			}
 		}
-- 
2.14.1


* [PATCH v7 2/8] mempool: change flags from int to unsigned int
  2017-10-01  9:28           ` [PATCH v7 " Santosh Shukla
  2017-10-01  9:28             ` [PATCH v7 1/8] mempool: remove unused flags argument Santosh Shukla
@ 2017-10-01  9:28             ` Santosh Shukla
  2017-10-01  9:28             ` [PATCH v7 3/8] mempool: add flags arg in xmem size and usage Santosh Shukla
                               ` (6 subsequent siblings)
  8 siblings, 0 replies; 116+ messages in thread
From: Santosh Shukla @ 2017-10-01  9:28 UTC (permalink / raw)
  To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla

mp->flags is an int, but the mempool API writes unsigned int
values into 'flags', so fix the data type of 'flags'.

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
 lib/librte_mempool/rte_mempool.c | 4 ++--
 lib/librte_mempool/rte_mempool.h | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 6fc3c9c7c..237665c65 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -515,7 +515,7 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
 int
 rte_mempool_populate_default(struct rte_mempool *mp)
 {
-	int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
+	unsigned int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
 	char mz_name[RTE_MEMZONE_NAMESIZE];
 	const struct rte_memzone *mz;
 	size_t size, total_elt_sz, align, pg_sz, pg_shift;
@@ -742,7 +742,7 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
 	struct rte_tailq_entry *te = NULL;
 	const struct rte_memzone *mz = NULL;
 	size_t mempool_size;
-	int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
+	unsigned int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
 	struct rte_mempool_objsz objsz;
 	unsigned lcore_id;
 	int ret;
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index ec3884473..bf65d62fe 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -226,7 +226,7 @@ struct rte_mempool {
 	};
 	void *pool_config;               /**< optional args for ops alloc. */
 	const struct rte_memzone *mz;    /**< Memzone where pool is alloc'd. */
-	int flags;                       /**< Flags of the mempool. */
+	unsigned int flags;              /**< Flags of the mempool. */
 	int socket_id;                   /**< Socket id passed at create. */
 	uint32_t size;                   /**< Max size of the mempool. */
 	uint32_t cache_size;
-- 
2.14.1


* [PATCH v7 3/8] mempool: add flags arg in xmem size and usage
  2017-10-01  9:28           ` [PATCH v7 " Santosh Shukla
  2017-10-01  9:28             ` [PATCH v7 1/8] mempool: remove unused flags argument Santosh Shukla
  2017-10-01  9:28             ` [PATCH v7 2/8] mempool: change flags from int to unsigned int Santosh Shukla
@ 2017-10-01  9:28             ` Santosh Shukla
  2017-10-01  9:28             ` [PATCH v7 4/8] doc: remove mempool notice Santosh Shukla
                               ` (5 subsequent siblings)
  8 siblings, 0 replies; 116+ messages in thread
From: Santosh Shukla @ 2017-10-01  9:28 UTC (permalink / raw)
  To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla

xmem_size and xmem_usage need to know the status of the mempool flags,
so add a 'flags' argument to the _xmem_size/usage() APIs.

A following patch will make use of it.

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
 drivers/net/xenvirt/rte_mempool_gntalloc.c |  7 ++++---
 lib/librte_mempool/rte_mempool.c           | 11 +++++++----
 lib/librte_mempool/rte_mempool.h           |  8 ++++++--
 test/test/test_mempool.c                   |  7 ++++---
 4 files changed, 21 insertions(+), 12 deletions(-)

diff --git a/drivers/net/xenvirt/rte_mempool_gntalloc.c b/drivers/net/xenvirt/rte_mempool_gntalloc.c
index 73e82f808..7f7aecdc1 100644
--- a/drivers/net/xenvirt/rte_mempool_gntalloc.c
+++ b/drivers/net/xenvirt/rte_mempool_gntalloc.c
@@ -79,7 +79,7 @@ _create_mempool(const char *name, unsigned elt_num, unsigned elt_size,
 		   unsigned cache_size, unsigned private_data_size,
 		   rte_mempool_ctor_t *mp_init, void *mp_init_arg,
 		   rte_mempool_obj_cb_t *obj_init, void *obj_init_arg,
-		   int socket_id, unsigned flags)
+		   int socket_id, unsigned int flags)
 {
 	struct _mempool_gntalloc_info mgi;
 	struct rte_mempool *mp = NULL;
@@ -114,7 +114,7 @@ _create_mempool(const char *name, unsigned elt_num, unsigned elt_size,
 	pg_shift = rte_bsf32(pg_sz);
 
 	rte_mempool_calc_obj_size(elt_size, flags, &objsz);
-	sz = rte_mempool_xmem_size(elt_num, objsz.total_size, pg_shift);
+	sz = rte_mempool_xmem_size(elt_num, objsz.total_size, pg_shift, flags);
 	pg_num = sz >> pg_shift;
 
 	pa_arr = calloc(pg_num, sizeof(pa_arr[0]));
@@ -162,7 +162,8 @@ _create_mempool(const char *name, unsigned elt_num, unsigned elt_size,
 	 * Check that allocated size is big enough to hold elt_num
 	 * objects and a calcualte how many bytes are actually required.
 	 */
-	usz = rte_mempool_xmem_usage(va, elt_num, objsz.total_size, pa_arr, pg_num, pg_shift);
+	usz = rte_mempool_xmem_usage(va, elt_num, objsz.total_size, pa_arr,
+				     pg_num, pg_shift, flags);
 	if (usz < 0) {
 		mp = NULL;
 		i = pg_num;
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 237665c65..005240042 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -238,7 +238,8 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
  * Calculate maximum amount of memory required to store given number of objects.
  */
 size_t
-rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift)
+rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
+		      __rte_unused unsigned int flags)
 {
 	size_t obj_per_page, pg_num, pg_sz;
 
@@ -264,7 +265,7 @@ rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift)
 ssize_t
 rte_mempool_xmem_usage(__rte_unused void *vaddr, uint32_t elt_num,
 	size_t total_elt_sz, const phys_addr_t paddr[], uint32_t pg_num,
-	uint32_t pg_shift)
+	uint32_t pg_shift, __rte_unused unsigned int flags)
 {
 	uint32_t elt_cnt = 0;
 	phys_addr_t start, end;
@@ -543,7 +544,8 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 
 	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
 	for (mz_id = 0, n = mp->size; n > 0; mz_id++, n -= ret) {
-		size = rte_mempool_xmem_size(n, total_elt_sz, pg_shift);
+		size = rte_mempool_xmem_size(n, total_elt_sz, pg_shift,
+						mp->flags);
 
 		ret = snprintf(mz_name, sizeof(mz_name),
 			RTE_MEMPOOL_MZ_FORMAT "_%d", mp->name, mz_id);
@@ -600,7 +602,8 @@ get_anon_size(const struct rte_mempool *mp)
 	pg_sz = getpagesize();
 	pg_shift = rte_bsf32(pg_sz);
 	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
-	size = rte_mempool_xmem_size(mp->size, total_elt_sz, pg_shift);
+	size = rte_mempool_xmem_size(mp->size, total_elt_sz, pg_shift,
+					mp->flags);
 
 	return size;
 }
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index bf65d62fe..85eb770dc 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -1476,11 +1476,13 @@ uint32_t rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
  *   by rte_mempool_calc_obj_size().
  * @param pg_shift
  *   LOG2 of the physical pages size. If set to 0, ignore page boundaries.
+ * @param flags
+ *  The mempool flags.
  * @return
  *   Required memory size aligned at page boundary.
  */
 size_t rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz,
-	uint32_t pg_shift);
+	uint32_t pg_shift, unsigned int flags);
 
 /**
  * Get the size of memory required to store mempool elements.
@@ -1503,6 +1505,8 @@ size_t rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz,
  *   Number of elements in the paddr array.
  * @param pg_shift
  *   LOG2 of the physical pages size.
+ * @param flags
+ *  The mempool flags.
  * @return
  *   On success, the number of bytes needed to store given number of
  *   objects, aligned to the given page size. If the provided memory
@@ -1511,7 +1515,7 @@ size_t rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz,
  */
 ssize_t rte_mempool_xmem_usage(void *vaddr, uint32_t elt_num,
 	size_t total_elt_sz, const phys_addr_t paddr[], uint32_t pg_num,
-	uint32_t pg_shift);
+	uint32_t pg_shift, unsigned int flags);
 
 /**
  * Walk list of all memory pools
diff --git a/test/test/test_mempool.c b/test/test/test_mempool.c
index 47dc3ac5f..a225e1209 100644
--- a/test/test/test_mempool.c
+++ b/test/test/test_mempool.c
@@ -474,7 +474,7 @@ test_mempool_same_name_twice_creation(void)
 }
 
 /*
- * BAsic test for mempool_xmem functions.
+ * Basic test for mempool_xmem functions.
  */
 static int
 test_mempool_xmem_misc(void)
@@ -485,10 +485,11 @@ test_mempool_xmem_misc(void)
 
 	elt_num = MAX_KEEP;
 	total_size = rte_mempool_calc_obj_size(MEMPOOL_ELT_SIZE, 0, NULL);
-	sz = rte_mempool_xmem_size(elt_num, total_size, MEMPOOL_PG_SHIFT_MAX);
+	sz = rte_mempool_xmem_size(elt_num, total_size, MEMPOOL_PG_SHIFT_MAX,
+					0);
 
 	usz = rte_mempool_xmem_usage(NULL, elt_num, total_size, 0, 1,
-		MEMPOOL_PG_SHIFT_MAX);
+		MEMPOOL_PG_SHIFT_MAX, 0);
 
 	if (sz != (size_t)usz)  {
 		printf("failure @ %s: rte_mempool_xmem_usage(%u, %u) "
-- 
2.14.1


* [PATCH v7 4/8] doc: remove mempool notice
  2017-10-01  9:28           ` [PATCH v7 " Santosh Shukla
                               ` (2 preceding siblings ...)
  2017-10-01  9:28             ` [PATCH v7 3/8] mempool: add flags arg in xmem size and usage Santosh Shukla
@ 2017-10-01  9:28             ` Santosh Shukla
  2017-10-01  9:28             ` [PATCH v7 5/8] mempool: get the mempool capability Santosh Shukla
                               ` (4 subsequent siblings)
  8 siblings, 0 replies; 116+ messages in thread
From: Santosh Shukla @ 2017-10-01  9:28 UTC (permalink / raw)
  To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla

Removed the mempool deprecation notice and
updated the change info in the 17.11 release notes.

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
 doc/guides/rel_notes/deprecation.rst   | 9 ---------
 doc/guides/rel_notes/release_17_11.rst | 7 +++++++
 2 files changed, 7 insertions(+), 9 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 3362f3350..0e4cb1f95 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -44,15 +44,6 @@ Deprecation Notices
   PKT_RX_QINQ_STRIPPED, that are better described. The old flags and
   their behavior will be kept until 17.08 and will be removed in 17.11.
 
-* mempool: The following will be modified in 17.11:
-
-  - ``rte_mempool_xmem_size`` and ``rte_mempool_xmem_usage`` need to know
-    the mempool flag status so adding new param rte_mempool in those API.
-  - Removing __rte_unused int flag param from ``rte_mempool_generic_put``
-    and ``rte_mempool_generic_get`` API.
-  - ``rte_mempool`` flags data type will changed from int to
-    unsigned int.
-
 * ethdev: Tx offloads will no longer be enabled by default in 17.11.
   Instead, the ``rte_eth_txmode`` structure will be extended with
   bit field to enable each Tx offload.
diff --git a/doc/guides/rel_notes/release_17_11.rst b/doc/guides/rel_notes/release_17_11.rst
index 8bf91bd40..2790a9505 100644
--- a/doc/guides/rel_notes/release_17_11.rst
+++ b/doc/guides/rel_notes/release_17_11.rst
@@ -103,6 +103,13 @@ Known Issues
    Also, make sure to start the actual text at the margin.
    =========================================================
 
+* **The following changes made in mempool library**
+
+  * Moved ``flags`` datatype from int to unsigned int for ``rte_mempool``.
+  * Removed ``__rte_unused int flag`` param from ``rte_mempool_generic_put``
+    and ``rte_mempool_generic_get`` API.
+  * Added ``flags`` param in ``rte_mempool_xmem_size`` and
+    ``rte_mempool_xmem_usage``.
 
 API Changes
 -----------
-- 
2.14.1


* [PATCH v7 5/8] mempool: get the mempool capability
  2017-10-01  9:28           ` [PATCH v7 " Santosh Shukla
                               ` (3 preceding siblings ...)
  2017-10-01  9:28             ` [PATCH v7 4/8] doc: remove mempool notice Santosh Shukla
@ 2017-10-01  9:28             ` Santosh Shukla
  2017-10-01  9:29             ` [PATCH v7 6/8] mempool: detect physical contiguous object in pool Santosh Shukla
                               ` (3 subsequent siblings)
  8 siblings, 0 replies; 116+ messages in thread
From: Santosh Shukla @ 2017-10-01  9:28 UTC (permalink / raw)
  To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla

Allow a mempool driver to advertise its pool capabilities.
For that purpose, an API (rte_mempool_ops_get_capabilities)
and a ->get_capabilities() handler have been introduced.
Upon a ->get_capabilities() call, the mempool driver advertises
its capabilities via the mempool flags parameter.

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
 lib/librte_mempool/rte_mempool.c           | 13 +++++++++++++
 lib/librte_mempool/rte_mempool.h           | 27 +++++++++++++++++++++++++++
 lib/librte_mempool/rte_mempool_ops.c       | 15 +++++++++++++++
 lib/librte_mempool/rte_mempool_version.map |  7 +++++++
 4 files changed, 62 insertions(+)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 005240042..92de39562 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -522,12 +522,25 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 	size_t size, total_elt_sz, align, pg_sz, pg_shift;
 	phys_addr_t paddr;
 	unsigned mz_id, n;
+	unsigned int mp_flags;
 	int ret;
 
 	/* mempool must not be populated */
 	if (mp->nb_mem_chunks != 0)
 		return -EEXIST;
 
+	/* Get mempool capabilities */
+	mp_flags = 0;
+	ret = rte_mempool_ops_get_capabilities(mp, &mp_flags);
+	if (ret == -ENOTSUP)
+		RTE_LOG(DEBUG, MEMPOOL, "get_capability not supported for %s\n",
+					mp->name);
+	else if (ret < 0)
+		return ret;
+
+	/* update mempool capabilities */
+	mp->flags |= mp_flags;
+
 	if (rte_xen_dom0_supported()) {
 		pg_sz = RTE_PGSIZE_2M;
 		pg_shift = rte_bsf32(pg_sz);
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 85eb770dc..d251d4255 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -389,6 +389,12 @@ typedef int (*rte_mempool_dequeue_t)(struct rte_mempool *mp,
  */
 typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
 
+/**
+ * Get the mempool capabilities.
+ */
+typedef int (*rte_mempool_get_capabilities_t)(const struct rte_mempool *mp,
+		unsigned int *flags);
+
 /** Structure defining mempool operations structure */
 struct rte_mempool_ops {
 	char name[RTE_MEMPOOL_OPS_NAMESIZE]; /**< Name of mempool ops struct. */
@@ -397,6 +403,10 @@ struct rte_mempool_ops {
 	rte_mempool_enqueue_t enqueue;   /**< Enqueue an object. */
 	rte_mempool_dequeue_t dequeue;   /**< Dequeue an object. */
 	rte_mempool_get_count get_count; /**< Get qty of available objs. */
+	/**
+	 * Get the mempool capabilities
+	 */
+	rte_mempool_get_capabilities_t get_capabilities;
 } __rte_cache_aligned;
 
 #define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */
@@ -508,6 +518,23 @@ rte_mempool_ops_enqueue_bulk(struct rte_mempool *mp, void * const *obj_table,
 unsigned
 rte_mempool_ops_get_count(const struct rte_mempool *mp);
 
+/**
+ * @internal wrapper for mempool_ops get_capabilities callback.
+ *
+ * @param mp [in]
+ *   Pointer to the memory pool.
+ * @param flags [out]
+ *   Pointer to the mempool flags.
+ * @return
+ *   - 0: Success; The mempool driver has advertised his pool capabilities in
+ *   flags param.
+ *   - -ENOTSUP - doesn't support get_capabilities ops (valid case).
+ *   - Otherwise, pool create fails.
+ */
+int
+rte_mempool_ops_get_capabilities(const struct rte_mempool *mp,
+					unsigned int *flags);
+
 /**
  * @internal wrapper for mempool_ops free callback.
  *
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
index 5f24de250..f2af5e5bb 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -37,6 +37,7 @@
 
 #include <rte_mempool.h>
 #include <rte_errno.h>
+#include <rte_dev.h>
 
 /* indirect jump table to support external memory pools. */
 struct rte_mempool_ops_table rte_mempool_ops_table = {
@@ -85,6 +86,7 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h)
 	ops->enqueue = h->enqueue;
 	ops->dequeue = h->dequeue;
 	ops->get_count = h->get_count;
+	ops->get_capabilities = h->get_capabilities;
 
 	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
 
@@ -123,6 +125,19 @@ rte_mempool_ops_get_count(const struct rte_mempool *mp)
 	return ops->get_count(mp);
 }
 
+/* wrapper to get external mempool capabilities. */
+int
+rte_mempool_ops_get_capabilities(const struct rte_mempool *mp,
+					unsigned int *flags)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_get_ops(mp->ops_index);
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->get_capabilities, -ENOTSUP);
+	return ops->get_capabilities(mp, flags);
+}
+
 /* sets mempool ops previously registered by rte_mempool_register_ops. */
 int
 rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index f9c079447..3c3471507 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -41,3 +41,10 @@ DPDK_16.07 {
 	rte_mempool_set_ops_byname;
 
 } DPDK_2.0;
+
+DPDK_17.11 {
+	global:
+
+	rte_mempool_ops_get_capabilities;
+
+} DPDK_16.07;
-- 
2.14.1


* [PATCH v7 6/8] mempool: detect physical contiguous object in pool
  2017-10-01  9:28           ` [PATCH v7 " Santosh Shukla
                               ` (4 preceding siblings ...)
  2017-10-01  9:28             ` [PATCH v7 5/8] mempool: get the mempool capability Santosh Shukla
@ 2017-10-01  9:29             ` Santosh Shukla
  2017-10-01  9:29             ` [PATCH v7 7/8] mempool: introduce block size align flag Santosh Shukla
                               ` (2 subsequent siblings)
  8 siblings, 0 replies; 116+ messages in thread
From: Santosh Shukla @ 2017-10-01  9:29 UTC (permalink / raw)
  To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla

The memory area containing all the objects must be physically
contiguous.
Introduce the MEMPOOL_F_CAPA_PHYS_CONTIG flag for such use cases.

The flag is used to detect whether the pool area has sufficient space
to fit all objects; if not, return -ENOSPC.
This way, we make sure that all objects within a pool are contiguous.

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
 lib/librte_mempool/rte_mempool.c | 10 ++++++++++
 lib/librte_mempool/rte_mempool.h |  6 ++++++
 2 files changed, 16 insertions(+)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 92de39562..146e38675 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -369,6 +369,16 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
 
 	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
 
+	/* Detect pool area has sufficient space for elements */
+	if (mp->flags & MEMPOOL_F_CAPA_PHYS_CONTIG) {
+		if (len < total_elt_sz * mp->size) {
+			RTE_LOG(ERR, MEMPOOL,
+				"pool area %" PRIx64 " not enough\n",
+				(uint64_t)len);
+			return -ENOSPC;
+		}
+	}
+
 	memhdr = rte_zmalloc("MEMPOOL_MEMHDR", sizeof(*memhdr), 0);
 	if (memhdr == NULL)
 		return -ENOMEM;
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index d251d4255..734392556 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -265,6 +265,12 @@ struct rte_mempool {
 #define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
 #define MEMPOOL_F_POOL_CREATED   0x0010 /**< Internal: pool is created. */
 #define MEMPOOL_F_NO_PHYS_CONTIG 0x0020 /**< Don't need physically contiguous objs. */
+/**
+ * This capability flag is advertised by a mempool handler, if the whole
+ * memory area containing the objects must be physically contiguous.
+ * Note: This flag should not be passed by application.
+ */
+#define MEMPOOL_F_CAPA_PHYS_CONTIG 0x0040
 
 /**
  * @internal When debug is enabled, store some statistics.
-- 
2.14.1

^ permalink raw reply related	[flat|nested] 116+ messages in thread

* [PATCH v7 7/8] mempool: introduce block size align flag
  2017-10-01  9:28           ` [PATCH v7 " Santosh Shukla
                               ` (5 preceding siblings ...)
  2017-10-01  9:29             ` [PATCH v7 6/8] mempool: detect physical contiguous object in pool Santosh Shukla
@ 2017-10-01  9:29             ` Santosh Shukla
  2017-10-02  8:35               ` santosh
  2017-10-02 14:26               ` Olivier MATZ
  2017-10-01  9:29             ` [PATCH v7 8/8] mempool: notify memory area to pool Santosh Shukla
  2017-10-06 20:00             ` [PATCH v7 0/8] Infrastructure to support octeontx HW mempool manager Thomas Monjalon
  8 siblings, 2 replies; 116+ messages in thread
From: Santosh Shukla @ 2017-10-01  9:29 UTC (permalink / raw)
  To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla

Some mempool HW, like the octeontx/fpa block, demands that the object
start address be aligned to the block size (/total_elem_sz).

Introducing the MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS flag.
If this flag is set:
- Align the object start address (vaddr) to a multiple of total_elt_sz.
- Allocate one additional object. The additional object is needed to
  make sure that the requested 'n' objects get correctly populated.

Example:
- Let's say we get a memory chunk of size 'x' from a memzone.
- And the application has requested 'n' objects from the mempool.
- Ideally, we would use the objects from start address 0 up to
  (x - block_sz) for the n objects.
- But the first object address, i.e. 0, is not necessarily aligned
  to block_sz.
- So we derive an offset value 'off' for block_sz alignment purposes.
- That 'off' makes sure that the start address of each object is
  blk_sz aligned.
- Applying 'off' may end up sacrificing the first block_sz area of
  the memzone area x, so the total number of objects which fit in
  the pool area becomes n-1, which is incorrect behavior.

Therefore we request one additional object (/block_sz area) from the
memzone when the MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS flag is set.

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
v6 --> v7:
- Added mask var (a flag checker) in xmem_size/usage() (suggested by Olivier)

Refer [1].
[1] http://dpdk.org/dev/patchwork/patch/28467/

v5 --> v6:
- Renamed from MEMPOOL_F_BLK_ALIGNED_OBJECTS to
MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS. (Suggested by Olivier)
- Updated capability flag description (Suggested by Olivier)

History refer [2]
[2] http://dpdk.org/dev/patchwork/patch/28418/

v4 --> v5:
- Added vaddr in git description of patch (suggested by Olivier)
- Renamed the aligned flag to MEMPOOL_F_BLK_ALIGNED_OBJECTS (suggested by
    Olivier)
Refer [3].
[3] http://dpdk.org/dev/patchwork/patch/27600/

 lib/librte_mempool/rte_mempool.c | 21 ++++++++++++++++++---
 lib/librte_mempool/rte_mempool.h | 12 ++++++++++++
 2 files changed, 30 insertions(+), 3 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 146e38675..df9d67ae6 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -239,9 +239,15 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
  */
 size_t
 rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
-		      __rte_unused unsigned int flags)
+		      unsigned int flags)
 {
 	size_t obj_per_page, pg_num, pg_sz;
+	unsigned int mask;
+
+	mask = MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS | MEMPOOL_F_CAPA_PHYS_CONTIG;
+	if ((flags & mask) == mask)
+		/* alignment needs one additional object */
+		elt_num += 1;
 
 	if (total_elt_sz == 0)
 		return 0;
@@ -265,12 +271,18 @@ rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
 ssize_t
 rte_mempool_xmem_usage(__rte_unused void *vaddr, uint32_t elt_num,
 	size_t total_elt_sz, const phys_addr_t paddr[], uint32_t pg_num,
-	uint32_t pg_shift, __rte_unused unsigned int flags)
+	uint32_t pg_shift, unsigned int flags)
 {
 	uint32_t elt_cnt = 0;
 	phys_addr_t start, end;
 	uint32_t paddr_idx;
 	size_t pg_sz = (size_t)1 << pg_shift;
+	unsigned int mask;
+
+	mask = MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS | MEMPOOL_F_CAPA_PHYS_CONTIG;
+	if ((flags & mask) == mask)
+		/* alignment needs one additional object */
+		elt_num += 1;
 
 	/* if paddr is NULL, assume contiguous memory */
 	if (paddr == NULL) {
@@ -390,7 +402,10 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
 	memhdr->free_cb = free_cb;
 	memhdr->opaque = opaque;
 
-	if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
+	if (mp->flags & MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS)
+		/* align object start address to a multiple of total_elt_sz */
+		off = total_elt_sz - ((uintptr_t)vaddr % total_elt_sz);
+	else if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
 		off = RTE_PTR_ALIGN_CEIL(vaddr, 8) - vaddr;
 	else
 		off = RTE_PTR_ALIGN_CEIL(vaddr, RTE_CACHE_LINE_SIZE) - vaddr;
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 734392556..24195dda0 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -271,6 +271,18 @@ struct rte_mempool {
  * Note: This flag should not be passed by application.
  */
 #define MEMPOOL_F_CAPA_PHYS_CONTIG 0x0040
+/**
+ * This capability flag is advertised by a mempool handler. It is used when
+ * the mempool driver wants the object start address (vaddr) aligned to the
+ * block size (/total element size).
+ *
+ * Note:
+ * - This flag should not be passed by the application.
+ *   It is used by the mempool driver only.
+ * - The mempool driver must also set the MEMPOOL_F_CAPA_PHYS_CONTIG flag
+ *   along with MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS.
+ */
+#define MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS 0x0080
 
 /**
  * @internal When debug is enabled, store some statistics.
-- 
2.14.1


* [PATCH v7 8/8] mempool: notify memory area to pool
  2017-10-01  9:28           ` [PATCH v7 " Santosh Shukla
                               ` (6 preceding siblings ...)
  2017-10-01  9:29             ` [PATCH v7 7/8] mempool: introduce block size align flag Santosh Shukla
@ 2017-10-01  9:29             ` Santosh Shukla
  2017-10-02  8:36               ` santosh
  2017-10-02 14:27               ` Olivier MATZ
  2017-10-06 20:00             ` [PATCH v7 0/8] Infrastructure to support octeontx HW mempool manager Thomas Monjalon
  8 siblings, 2 replies; 116+ messages in thread
From: Santosh Shukla @ 2017-10-01  9:29 UTC (permalink / raw)
  To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla

A HW pool manager, e.g. the Octeontx SoC, demands that s/w program the
start and end address of the pool. Currently, there is no such API in
the external mempool. Introducing the rte_mempool_ops_register_memory_area
API, which lets the HW (pool manager) know when the common layer selects
a hugepage: for each hugepage, notify its start/end address to the HW
pool manager.

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
v6 --> v7:
- Incorporated comment nits suggested by Olivier.

Refer [1].
[1] http://dpdk.org/dev/patchwork/patch/28468/

v5 --> v6:
- Renamed from rte_mempool_ops_update_range to
  rte_mempool_ops_register_memory_area (suggested by Olivier)
- The renamed API now returns int (suggested by Olivier)
- Updated the API description and error details explicitly
  (suggested by Olivier)
  Refer [2]
  [2] http://dpdk.org/dev/patchwork/patch/28419/

 lib/librte_mempool/rte_mempool.c           |  5 +++++
 lib/librte_mempool/rte_mempool.h           | 30 ++++++++++++++++++++++++++++++
 lib/librte_mempool/rte_mempool_ops.c       | 14 ++++++++++++++
 lib/librte_mempool/rte_mempool_version.map |  1 +
 4 files changed, 50 insertions(+)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index df9d67ae6..fb49a010d 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -367,6 +367,11 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
 	struct rte_mempool_memhdr *memhdr;
 	int ret;
 
+	/* Notify memory area to mempool */
+	ret = rte_mempool_ops_register_memory_area(mp, vaddr, paddr, len);
+	if (ret != -ENOTSUP && ret < 0)
+		return ret;
+
 	/* create the internal ring if not already done */
 	if ((mp->flags & MEMPOOL_F_POOL_CREATED) == 0) {
 		ret = rte_mempool_ops_alloc(mp);
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 24195dda0..c69841ec4 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -413,6 +413,12 @@ typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
 typedef int (*rte_mempool_get_capabilities_t)(const struct rte_mempool *mp,
 		unsigned int *flags);
 
+/**
+ * Notify new memory area to mempool.
+ */
+typedef int (*rte_mempool_ops_register_memory_area_t)
+(const struct rte_mempool *mp, char *vaddr, phys_addr_t paddr, size_t len);
+
 /** Structure defining mempool operations structure */
 struct rte_mempool_ops {
 	char name[RTE_MEMPOOL_OPS_NAMESIZE]; /**< Name of mempool ops struct. */
@@ -425,6 +431,10 @@ struct rte_mempool_ops {
 	 * Get the mempool capabilities
 	 */
 	rte_mempool_get_capabilities_t get_capabilities;
+	/**
+	 * Notify new memory area to mempool
+	 */
+	rte_mempool_ops_register_memory_area_t register_memory_area;
 } __rte_cache_aligned;
 
 #define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */
@@ -552,6 +562,26 @@ rte_mempool_ops_get_count(const struct rte_mempool *mp);
 int
 rte_mempool_ops_get_capabilities(const struct rte_mempool *mp,
 					unsigned int *flags);
+/**
+ * @internal wrapper for mempool_ops register_memory_area callback.
+ * API to notify the mempool handler when a new memory area is added to the pool.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param vaddr
+ *   Pointer to the buffer virtual address.
+ * @param paddr
+ *   Pointer to the buffer physical address.
+ * @param len
+ *   Pool size.
+ * @return
+ *   - 0: Success;
+ *   - -ENOTSUP - doesn't support register_memory_area ops (valid error case).
+ *   - Otherwise, rte_mempool_populate_phys fails, and thus pool creation fails.
+ */
+int
+rte_mempool_ops_register_memory_area(const struct rte_mempool *mp,
+				char *vaddr, phys_addr_t paddr, size_t len);
 
 /**
  * @internal wrapper for mempool_ops free callback.
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
index f2af5e5bb..a6b5f2002 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -87,6 +87,7 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h)
 	ops->dequeue = h->dequeue;
 	ops->get_count = h->get_count;
 	ops->get_capabilities = h->get_capabilities;
+	ops->register_memory_area = h->register_memory_area;
 
 	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
 
@@ -138,6 +139,19 @@ rte_mempool_ops_get_capabilities(const struct rte_mempool *mp,
 	return ops->get_capabilities(mp, flags);
 }
 
+/* wrapper to notify new memory area to external mempool */
+int
+rte_mempool_ops_register_memory_area(const struct rte_mempool *mp, char *vaddr,
+					phys_addr_t paddr, size_t len)
+{
+	struct rte_mempool_ops *ops;
+
+	ops = rte_mempool_get_ops(mp->ops_index);
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->register_memory_area, -ENOTSUP);
+	return ops->register_memory_area(mp, vaddr, paddr, len);
+}
+
 /* sets mempool ops previously registered by rte_mempool_register_ops. */
 int
 rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index 3c3471507..ff86dc9a7 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -46,5 +46,6 @@ DPDK_17.11 {
 	global:
 
 	rte_mempool_ops_get_capabilities;
+	rte_mempool_ops_register_memory_area;
 
 } DPDK_16.07;
-- 
2.14.1


* Re: [PATCH v7 7/8] mempool: introduce block size align flag
  2017-10-01  9:29             ` [PATCH v7 7/8] mempool: introduce block size align flag Santosh Shukla
@ 2017-10-02  8:35               ` santosh
  2017-10-02 14:26               ` Olivier MATZ
  1 sibling, 0 replies; 116+ messages in thread
From: santosh @ 2017-10-02  8:35 UTC (permalink / raw)
  To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal

Hi Olivier,

On Sunday 01 October 2017 02:59 PM, Santosh Shukla wrote:
> Some mempool HW, like the octeontx/fpa block, demands that the object
> start address be aligned to the block size (/total_elem_sz).
>
> Introducing the MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS flag.
> If this flag is set:
> - Align the object start address (vaddr) to a multiple of total_elt_sz.
> - Allocate one additional object. The additional object is needed to
>   make sure that the requested 'n' objects get correctly populated.
>
> Example:
> - Let's say we get a memory chunk of size 'x' from a memzone.
> - And the application has requested 'n' objects from the mempool.
> - Ideally, we would use the objects from start address 0 up to
>   (x - block_sz) for the n objects.
> - But the first object address, i.e. 0, is not necessarily aligned
>   to block_sz.
> - So we derive an offset value 'off' for block_sz alignment purposes.
> - That 'off' makes sure that the start address of each object is
>   blk_sz aligned.
> - Applying 'off' may end up sacrificing the first block_sz area of
>   the memzone area x, so the total number of objects which fit in
>   the pool area becomes n-1, which is incorrect behavior.
>
> Therefore we request one additional object (/block_sz area) from the
> memzone when the MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS flag is set.
>
> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> ---

early ping, since we need this for -rc1! Thanks.


* Re: [PATCH v7 8/8] mempool: notify memory area to pool
  2017-10-01  9:29             ` [PATCH v7 8/8] mempool: notify memory area to pool Santosh Shukla
@ 2017-10-02  8:36               ` santosh
  2017-10-02 14:27               ` Olivier MATZ
  1 sibling, 0 replies; 116+ messages in thread
From: santosh @ 2017-10-02  8:36 UTC (permalink / raw)
  To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal

Hi Olivier,


On Sunday 01 October 2017 02:59 PM, Santosh Shukla wrote:
> A HW pool manager, e.g. the Octeontx SoC, demands that s/w program the
> start and end address of the pool. Currently, there is no such API in
> the external mempool. Introducing the rte_mempool_ops_register_memory_area
> API, which lets the HW (pool manager) know when the common layer selects
> a hugepage: for each hugepage, notify its start/end address to the HW
> pool manager.
>
> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> ---

ping, required for -rc1. Thanks.


* Re: [PATCH v7 7/8] mempool: introduce block size align flag
  2017-10-01  9:29             ` [PATCH v7 7/8] mempool: introduce block size align flag Santosh Shukla
  2017-10-02  8:35               ` santosh
@ 2017-10-02 14:26               ` Olivier MATZ
  1 sibling, 0 replies; 116+ messages in thread
From: Olivier MATZ @ 2017-10-02 14:26 UTC (permalink / raw)
  To: Santosh Shukla; +Cc: dev, thomas, jerin.jacob, hemant.agrawal

On Sun, Oct 01, 2017 at 02:59:01PM +0530, Santosh Shukla wrote:
> Some mempool HW, like the octeontx/fpa block, demands that the object
> start address be aligned to the block size (/total_elem_sz).
>
> Introducing the MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS flag.
> If this flag is set:
> - Align the object start address (vaddr) to a multiple of total_elt_sz.
> - Allocate one additional object. The additional object is needed to
>   make sure that the requested 'n' objects get correctly populated.
>
> Example:
> - Let's say we get a memory chunk of size 'x' from a memzone.
> - And the application has requested 'n' objects from the mempool.
> - Ideally, we would use the objects from start address 0 up to
>   (x - block_sz) for the n objects.
> - But the first object address, i.e. 0, is not necessarily aligned
>   to block_sz.
> - So we derive an offset value 'off' for block_sz alignment purposes.
> - That 'off' makes sure that the start address of each object is
>   blk_sz aligned.
> - Applying 'off' may end up sacrificing the first block_sz area of
>   the memzone area x, so the total number of objects which fit in
>   the pool area becomes n-1, which is incorrect behavior.
>
> Therefore we request one additional object (/block_sz area) from the
> memzone when the MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS flag is set.
> 
> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>

Acked-by: Olivier Matz <olivier.matz@6wind.com>


* Re: [PATCH v7 8/8] mempool: notify memory area to pool
  2017-10-01  9:29             ` [PATCH v7 8/8] mempool: notify memory area to pool Santosh Shukla
  2017-10-02  8:36               ` santosh
@ 2017-10-02 14:27               ` Olivier MATZ
  1 sibling, 0 replies; 116+ messages in thread
From: Olivier MATZ @ 2017-10-02 14:27 UTC (permalink / raw)
  To: Santosh Shukla; +Cc: dev, thomas, jerin.jacob, hemant.agrawal

On Sun, Oct 01, 2017 at 02:59:02PM +0530, Santosh Shukla wrote:
> A HW pool manager, e.g. the Octeontx SoC, demands that s/w program the
> start and end address of the pool. Currently, there is no such API in
> the external mempool. Introducing the rte_mempool_ops_register_memory_area
> API, which lets the HW (pool manager) know when the common layer selects
> a hugepage: for each hugepage, notify its start/end address to the HW
> pool manager.
> 
> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>

Acked-by: Olivier Matz <olivier.matz@6wind.com>


* Re: [PATCH v7 0/8] Infrastructure to support octeontx HW mempool manager
  2017-10-01  9:28           ` [PATCH v7 " Santosh Shukla
                               ` (7 preceding siblings ...)
  2017-10-01  9:29             ` [PATCH v7 8/8] mempool: notify memory area to pool Santosh Shukla
@ 2017-10-06 20:00             ` Thomas Monjalon
  8 siblings, 0 replies; 116+ messages in thread
From: Thomas Monjalon @ 2017-10-06 20:00 UTC (permalink / raw)
  To: Santosh Shukla; +Cc: dev, olivier.matz, jerin.jacob, hemant.agrawal

> Santosh Shukla (8):
>   mempool: remove unused flags argument
>   mempool: change flags from int to unsigned int
>   mempool: add flags arg in xmem size and usage
>   doc: remove mempool notice
>   mempool: get the mempool capability
>   mempool: detect physical contiguous object in pool
>   mempool: introduce block size align flag
>   mempool: notify memory area to pool

Applied, thanks


end of thread, other threads:[~2017-10-06 20:00 UTC | newest]

Thread overview: 116+ messages
-- links below jump to the message on this page --
2017-06-21 17:32 [PATCH 0/4] Infrastructure to support octeontx HW mempool manager Santosh Shukla
2017-06-21 17:32 ` [PATCH 1/4] mempool: get the external mempool capability Santosh Shukla
2017-07-03 16:37   ` Olivier Matz
2017-07-05  6:41     ` santosh
2017-07-10 13:55       ` Olivier Matz
2017-07-10 16:09         ` santosh
2017-06-21 17:32 ` [PATCH 2/4] mempool: detect physical contiguous object in pool Santosh Shukla
2017-07-03 16:37   ` Olivier Matz
2017-07-05  7:07     ` santosh
2017-06-21 17:32 ` [PATCH 3/4] mempool: introduce block size align flag Santosh Shukla
2017-07-03 16:37   ` Olivier Matz
2017-07-05  7:35     ` santosh
2017-07-10 13:15       ` Olivier Matz
2017-07-10 16:22         ` santosh
2017-06-21 17:32 ` [PATCH 4/4] mempool: update range info to pool Santosh Shukla
2017-07-13  9:32 ` [PATCH v2 0/6] Infrastructure to support octeontx HW mempool manager Santosh Shukla
2017-07-13  9:32   ` [PATCH v2 1/6] mempool: fix flags data type Santosh Shukla
2017-07-13  9:32   ` [PATCH v2 2/6] mempool: get the mempool capability Santosh Shukla
2017-07-13  9:32   ` [PATCH v2 3/6] mempool: detect physical contiguous object in pool Santosh Shukla
2017-07-13  9:32   ` [PATCH v2 4/6] mempool: add mempool arg in xmem size and usage Santosh Shukla
2017-07-13  9:32   ` [PATCH v2 5/6] mempool: introduce block size align flag Santosh Shukla
2017-07-13  9:32   ` [PATCH v2 6/6] mempool: update range info to pool Santosh Shukla
2017-07-18  6:07   ` [PATCH v2 0/6] Infrastructure to support octeontx HW mempool manager santosh
2017-07-20 13:47   ` [PATCH v3 " Santosh Shukla
2017-07-20 13:47     ` [PATCH v3 1/6] mempool: fix flags data type Santosh Shukla
2017-07-20 13:47     ` [PATCH v3 2/6] mempool: get the mempool capability Santosh Shukla
2017-07-20 13:47     ` [PATCH v3 3/6] mempool: detect physical contiguous object in pool Santosh Shukla
2017-07-20 13:47     ` [PATCH v3 4/6] mempool: add mempool arg in xmem size and usage Santosh Shukla
2017-07-20 13:47     ` [PATCH v3 5/6] mempool: introduce block size align flag Santosh Shukla
2017-07-20 13:47     ` [PATCH v3 6/6] mempool: update range info to pool Santosh Shukla
2017-08-15  6:07     ` [PATCH v4 0/7] Infrastructure to support octeontx HW mempool manager Santosh Shukla
2017-08-15  6:07       ` [PATCH v4 1/7] mempool: fix flags data type Santosh Shukla
2017-09-04 14:11         ` Olivier MATZ
2017-09-04 14:18           ` santosh
2017-08-15  6:07       ` [PATCH v4 2/7] mempool: add mempool arg in xmem size and usage Santosh Shukla
2017-09-04 14:22         ` Olivier MATZ
2017-09-04 14:33           ` santosh
2017-09-04 14:46             ` Olivier MATZ
2017-09-04 14:58               ` santosh
2017-09-04 15:23                 ` Olivier MATZ
2017-09-04 15:52                   ` santosh
2017-08-15  6:07       ` [PATCH v4 3/7] doc: remove mempool api change notice Santosh Shukla
2017-08-15  6:07       ` [PATCH v4 4/7] mempool: get the mempool capability Santosh Shukla
2017-09-04 14:32         ` Olivier MATZ
2017-09-04 14:44           ` santosh
2017-09-04 15:56             ` Olivier MATZ
2017-09-04 16:29               ` santosh
2017-08-15  6:07       ` [PATCH v4 5/7] mempool: detect physical contiguous object in pool Santosh Shukla
2017-09-04 14:43         ` Olivier MATZ
2017-09-04 14:47           ` santosh
2017-09-04 16:00             ` Olivier MATZ
2017-08-15  6:07       ` [PATCH v4 6/7] mempool: introduce block size align flag Santosh Shukla
2017-09-04 16:20         ` Olivier MATZ
2017-09-04 17:45           ` santosh
2017-09-07  7:27             ` Olivier MATZ
2017-09-07  7:37               ` santosh
2017-08-15  6:07       ` [PATCH v4 7/7] mempool: update range info to pool Santosh Shukla
2017-09-06 11:28       ` [PATCH v5 0/8] Infrastructure to support octeontx HW mempool manager Santosh Shukla
2017-09-06 11:28         ` [PATCH v5 1/8] mempool: remove unused flags argument Santosh Shukla
2017-09-07  7:41           ` Olivier MATZ
2017-09-06 11:28         ` [PATCH v5 2/8] mempool: change flags from int to unsigned int Santosh Shukla
2017-09-07  7:43           ` Olivier MATZ
2017-09-06 11:28         ` [PATCH v5 3/8] mempool: add flags arg in xmem size and usage Santosh Shukla
2017-09-07  7:46           ` Olivier MATZ
2017-09-07  7:49             ` santosh
2017-09-06 11:28         ` [PATCH v5 4/8] doc: remove mempool notice Santosh Shukla
2017-09-07  7:47           ` Olivier MATZ
2017-09-06 11:28         ` [PATCH v5 5/8] mempool: get the mempool capability Santosh Shukla
2017-09-07  7:59           ` Olivier MATZ
2017-09-07  8:15             ` santosh
2017-09-07  8:39               ` Olivier MATZ
2017-09-06 11:28         ` [PATCH v5 6/8] mempool: detect physical contiguous object in pool Santosh Shukla
2017-09-07  8:05           ` Olivier MATZ
2017-09-06 11:28         ` [PATCH v5 7/8] mempool: introduce block size align flag Santosh Shukla
2017-09-07  8:13           ` Olivier MATZ
2017-09-07  8:27             ` santosh
2017-09-07  8:57               ` Olivier MATZ
2017-09-06 11:28         ` [PATCH v5 8/8] mempool: update range info to pool Santosh Shukla
2017-09-07  8:30           ` Olivier MATZ
2017-09-07  8:56             ` santosh
2017-09-07  9:09               ` Olivier MATZ
2017-09-07 15:30         ` [PATCH v6 0/8] Infrastructure to support octeontx HW mempool manager Santosh Shukla
2017-09-07 15:30           ` [PATCH v6 1/8] mempool: remove unused flags argument Santosh Shukla
2017-09-07 15:30           ` [PATCH v6 2/8] mempool: change flags from int to unsigned int Santosh Shukla
2017-09-07 15:30           ` [PATCH v6 3/8] mempool: add flags arg in xmem size and usage Santosh Shukla
2017-09-25 11:24             ` Olivier MATZ
2017-09-07 15:30           ` [PATCH v6 4/8] doc: remove mempool notice Santosh Shukla
2017-09-07 15:30           ` [PATCH v6 5/8] mempool: get the mempool capability Santosh Shukla
2017-09-25 11:26             ` Olivier MATZ
2017-09-07 15:30           ` [PATCH v6 6/8] mempool: detect physical contiguous object in pool Santosh Shukla
2017-09-07 15:30           ` [PATCH v6 7/8] mempool: introduce block size align flag Santosh Shukla
2017-09-22 12:59             ` Hemant Agrawal
2017-09-25 11:32             ` Olivier MATZ
2017-09-25 22:08               ` santosh
2017-09-07 15:30           ` [PATCH v6 8/8] mempool: notify memory area to pool Santosh Shukla
2017-09-25 11:41             ` Olivier MATZ
2017-09-25 22:18               ` santosh
2017-09-29  4:53                 ` santosh
2017-09-29  8:20                   ` Olivier MATZ
2017-09-29  8:25                     ` santosh
2017-09-13  9:58           ` [PATCH v6 0/8] Infrastructure to support octeontx HW mempool manager santosh
2017-09-19  8:26             ` santosh
2017-10-01  9:28           ` [PATCH v7 " Santosh Shukla
2017-10-01  9:28             ` [PATCH v7 1/8] mempool: remove unused flags argument Santosh Shukla
2017-10-01  9:28             ` [PATCH v7 2/8] mempool: change flags from int to unsigned int Santosh Shukla
2017-10-01  9:28             ` [PATCH v7 3/8] mempool: add flags arg in xmem size and usage Santosh Shukla
2017-10-01  9:28             ` [PATCH v7 4/8] doc: remove mempool notice Santosh Shukla
2017-10-01  9:28             ` [PATCH v7 5/8] mempool: get the mempool capability Santosh Shukla
2017-10-01  9:29             ` [PATCH v7 6/8] mempool: detect physical contiguous object in pool Santosh Shukla
2017-10-01  9:29             ` [PATCH v7 7/8] mempool: introduce block size align flag Santosh Shukla
2017-10-02  8:35               ` santosh
2017-10-02 14:26               ` Olivier MATZ
2017-10-01  9:29             ` [PATCH v7 8/8] mempool: notify memory area to pool Santosh Shukla
2017-10-02  8:36               ` santosh
2017-10-02 14:27               ` Olivier MATZ
2017-10-06 20:00             ` [PATCH v7 0/8] Infrastructure to support octeontx HW mempool manager Thomas Monjalon
