* [RFC 0/7] changing mbuf pool handler
@ 2016-09-19 13:42 Olivier Matz
  2016-09-19 13:42 ` [RFC 1/7] mbuf: set the handler at mbuf pool creation Olivier Matz
                   ` (7 more replies)
  0 siblings, 8 replies; 15+ messages in thread
From: Olivier Matz @ 2016-09-19 13:42 UTC (permalink / raw)
  To: dev; +Cc: jerin.jacob, hemant.agrawal, david.hunt

Hello,

This follows the discussion in [1] ("usages issue with external mempool").

This is an attempt to make the mempool_ops feature introduced
by David Hunt [2] more widely usable by applications.

It applies on top of a minor fix in mbuf lib [3].

To summarize the needs (please comment if I did not get them right):

- new hw-assisted mempool handlers will soon be introduced
- to make use of them, the new mempool API [4] (rte_mempool_create_empty,
  rte_mempool_populate, ...) has to be used (see the sketch after this list)
- the legacy mempool API (rte_mempool_create) does not allow changing
  the mempool ops. The default is "ring_<s|m>p_<s|m>c", depending on
  the flags.
- the mbuf helper (rte_pktmbuf_pool_create) does not allow changing
  them either; the default is RTE_MBUF_DEFAULT_MEMPOOL_OPS
  ("ring_mp_mc")
- today, most (if not all) applications and examples use either
  rte_pktmbuf_pool_create or rte_mempool_create to create the mbuf
  pool, making it difficult to take advantage of this feature with
  existing apps.
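
For reference, here is a rough sketch of what selecting a handler looks
like with the new mempool API (not part of this patchset; the helper
name, sizes and error handling below are only illustrative):

#include <string.h>

#include <rte_mbuf.h>
#include <rte_mempool.h>

/* sketch: create a mbuf pool with an explicit mempool handler,
 * using the new mempool API */
static struct rte_mempool *
mbuf_pool_with_ops(const char *name, unsigned n, unsigned cache_size,
    uint16_t data_room_size, int socket_id, const char *ops_name)
{
    struct rte_pktmbuf_pool_private priv;
    struct rte_mempool *mp;

    mp = rte_mempool_create_empty(name, n,
        sizeof(struct rte_mbuf) + data_room_size,
        cache_size, sizeof(priv), socket_id, 0);
    if (mp == NULL)
        return NULL;

    /* the handler must be selected before the pool is populated */
    if (rte_mempool_set_ops_byname(mp, ops_name, NULL) != 0) {
        rte_mempool_free(mp);
        return NULL;
    }

    memset(&priv, 0, sizeof(priv));
    priv.mbuf_data_room_size = data_room_size;
    priv.mbuf_priv_size = 0;
    rte_pktmbuf_pool_init(mp, &priv);

    if (rte_mempool_populate_default(mp) < 0) {
        rte_mempool_free(mp);
        return NULL;
    }

    rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);
    return mp;
}

This is basically what rte_pktmbuf_pool_create() already does
internally, except that the caller cannot choose the ops.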

My initial idea was to deprecate both rte_pktmbuf_pool_create() and
rte_mempool_create(), forcing the applications to use the new API, which
is more flexible. But after digging a bit, it appeared that
rte_mempool_create() is widely used, and not only for mbufs. Deprecating
it would have a big impact on applications, and replacing it with the
new API would be overkill in many use-cases.

So I finally tried the following approach (inspired by a suggestion
from Jerin [5]):

- add a new mempool_ops parameter to rte_pktmbuf_pool_create(). This
  unfortunately breaks the API, but I implemented an ABI compat layer.
  If the patch is accepted, we could discuss how to announce/schedule
  the API change.
- update the applications and documentation to prefer
  rte_pktmbuf_pool_create() as much as possible
- update the most commonly used examples (testpmd, l2fwd, l3fwd) to add
  a new command-line argument to select the mempool handler

I hope external applications would then switch to
rte_pktmbuf_pool_create(), since it supports most of the use-cases (even
priv_size != 0, since rte_mempool_obj_iter() can be called afterwards, as
sketched below).
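
As an illustration, a rough sketch of such a sequence with the proposed
prototype (the sizes, pool name and callback below are hypothetical):

/* per-object callback: set up the application private area located
 * right after the rte_mbuf structure */
static void
app_priv_init(struct rte_mempool *mp __rte_unused, void *arg __rte_unused,
    void *obj, unsigned idx __rte_unused)
{
    struct rte_mbuf *m = obj;

    memset((char *)m + sizeof(struct rte_mbuf), 0,
        rte_pktmbuf_priv_size(m->pool));
}

struct rte_mempool *mp;

/* priv_size = 64: each mbuf carries a 64-byte private area */
mp = rte_pktmbuf_pool_create("mbuf_pool", 8192, 256, 64,
    RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id(), NULL);
if (mp == NULL)
    rte_exit(EXIT_FAILURE, "cannot create mbuf pool\n");

/* additional per-mbuf initialization, on top of rte_pktmbuf_init() */
rte_mempool_obj_iter(mp, app_priv_init, NULL);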

Comments are of course welcome. Note: the patchset is not really
tested yet.


Thanks,
Olivier

[1] http://dpdk.org/ml/archives/dev/2016-July/044734.html
[2] http://dpdk.org/ml/archives/dev/2016-June/042423.html
[3] http://www.dpdk.org/dev/patchwork/patch/15923/
[4] http://dpdk.org/ml/archives/dev/2016-May/039229.html
[5] http://dpdk.org/ml/archives/dev/2016-July/044779.html


Olivier Matz (7):
  mbuf: set the handler at mbuf pool creation
  mbuf: use helper to create the pool
  testpmd: new parameter to set mbuf pool ops
  l3fwd: rework long options parsing
  l3fwd: new parameter to set mbuf pool ops
  l2fwd: rework long options parsing
  l2fwd: new parameter to set mbuf pool ops

 app/pdump/main.c                                   |   2 +-
 app/test-pipeline/init.c                           |   3 +-
 app/test-pmd/parameters.c                          |   5 +
 app/test-pmd/testpmd.c                             |  16 +-
 app/test-pmd/testpmd.h                             |   1 +
 app/test/test_cryptodev.c                          |   2 +-
 app/test/test_cryptodev_perf.c                     |   2 +-
 app/test/test_distributor.c                        |   2 +-
 app/test/test_distributor_perf.c                   |   2 +-
 app/test/test_kni.c                                |   2 +-
 app/test/test_link_bonding.c                       |   2 +-
 app/test/test_link_bonding_mode4.c                 |   2 +-
 app/test/test_link_bonding_rssconf.c               |  11 +-
 app/test/test_mbuf.c                               |   6 +-
 app/test/test_pmd_perf.c                           |   3 +-
 app/test/test_pmd_ring.c                           |   2 +-
 app/test/test_reorder.c                            |   2 +-
 app/test/test_sched.c                              |   2 +-
 app/test/test_table.c                              |   2 +-
 doc/guides/prog_guide/mbuf_lib.rst                 |   2 +-
 doc/guides/sample_app_ug/ip_reassembly.rst         |  13 +-
 doc/guides/sample_app_ug/ipv4_multicast.rst        |  12 +-
 doc/guides/sample_app_ug/l2_forward_job_stats.rst  |  33 ++--
 .../sample_app_ug/l2_forward_real_virtual.rst      |  26 ++-
 doc/guides/sample_app_ug/ptpclient.rst             |  12 +-
 doc/guides/sample_app_ug/quota_watermark.rst       |  26 ++-
 drivers/net/bonding/rte_eth_bond_8023ad.c          |  13 +-
 drivers/net/bonding/rte_eth_bond_alb.c             |   2 +-
 examples/bond/main.c                               |   2 +-
 examples/distributor/main.c                        |   2 +-
 examples/dpdk_qat/main.c                           |   3 +-
 examples/ethtool/ethtool-app/main.c                |   4 +-
 examples/exception_path/main.c                     |   3 +-
 examples/ip_fragmentation/main.c                   |   4 +-
 examples/ip_pipeline/init.c                        |  19 ++-
 examples/ip_reassembly/main.c                      |  16 +-
 examples/ipsec-secgw/ipsec-secgw.c                 |   2 +-
 examples/ipv4_multicast/main.c                     |   6 +-
 examples/kni/main.c                                |   2 +-
 examples/l2fwd-cat/l2fwd-cat.c                     |   3 +-
 examples/l2fwd-crypto/main.c                       |   2 +-
 examples/l2fwd-jobstats/main.c                     |   2 +-
 examples/l2fwd-keepalive/main.c                    |   2 +-
 examples/l2fwd/main.c                              |  36 ++++-
 examples/l3fwd-acl/main.c                          |   2 +-
 examples/l3fwd-power/main.c                        |   2 +-
 examples/l3fwd-vf/main.c                           |   2 +-
 examples/l3fwd/main.c                              | 180 +++++++++++----------
 examples/link_status_interrupt/main.c              |   2 +-
 examples/load_balancer/init.c                      |   2 +-
 .../client_server_mp/mp_server/init.c              |   3 +-
 examples/multi_process/l2fwd_fork/main.c           |  14 +-
 examples/multi_process/symmetric_mp/main.c         |   2 +-
 examples/netmap_compat/bridge/bridge.c             |   2 +-
 examples/packet_ordering/main.c                    |   2 +-
 examples/performance-thread/l3fwd-thread/main.c    |   2 +-
 examples/ptpclient/ptpclient.c                     |   3 +-
 examples/qos_meter/main.c                          |   2 +-
 examples/qos_sched/init.c                          |   2 +-
 examples/quota_watermark/qw/main.c                 |   2 +-
 examples/rxtx_callbacks/main.c                     |   2 +-
 examples/skeleton/basicfwd.c                       |   3 +-
 examples/tep_termination/main.c                    |  17 +-
 examples/vhost/main.c                              |   2 +-
 examples/vhost_xen/main.c                          |   2 +-
 examples/vmdq/main.c                               |   2 +-
 examples/vmdq_dcb/main.c                           |   2 +-
 lib/librte_mbuf/rte_mbuf.c                         |  34 +++-
 lib/librte_mbuf/rte_mbuf.h                         |  44 +++--
 lib/librte_mbuf/rte_mbuf_version.map               |   7 +
 70 files changed, 366 insertions(+), 289 deletions(-)

-- 
2.8.1


* [RFC 1/7] mbuf: set the handler at mbuf pool creation
  2016-09-19 13:42 [RFC 0/7] changing mbuf pool handler Olivier Matz
@ 2016-09-19 13:42 ` Olivier Matz
  2016-09-19 13:42 ` [RFC 2/7] mbuf: use helper to create the pool Olivier Matz
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 15+ messages in thread
From: Olivier Matz @ 2016-09-19 13:42 UTC (permalink / raw)
  To: dev; +Cc: jerin.jacob, hemant.agrawal, david.hunt

Add a new argument to rte_pktmbuf_pool_create() to specify the mempool
ops (handler) to use. If set to NULL, the default mbuf pool handler
(RTE_MBUF_DEFAULT_MEMPOOL_OPS) is selected.
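
For illustration, an application could then either keep today's default
or request a specific handler (the pool names and handler below are
just examples):

/* NULL: use the compile-time default (RTE_MBUF_DEFAULT_MEMPOOL_OPS) */
mp = rte_pktmbuf_pool_create("mbuf_pool", 8192, 256, 0,
    RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id(), NULL);

/* or explicitly select a handler, e.g. a sp/sc ring based one */
mp = rte_pktmbuf_pool_create("mbuf_pool_spsc", 8192, 256, 0,
    RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id(), "ring_sp_sc");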

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
 app/pdump/main.c                                   |  2 +-
 app/test-pipeline/init.c                           |  3 ++-
 app/test-pmd/testpmd.c                             |  3 ++-
 app/test/test_cryptodev.c                          |  2 +-
 app/test/test_cryptodev_perf.c                     |  2 +-
 app/test/test_distributor.c                        |  2 +-
 app/test/test_distributor_perf.c                   |  2 +-
 app/test/test_kni.c                                |  2 +-
 app/test/test_link_bonding.c                       |  2 +-
 app/test/test_link_bonding_mode4.c                 |  2 +-
 app/test/test_mbuf.c                               |  6 ++---
 app/test/test_pmd_perf.c                           |  3 ++-
 app/test/test_pmd_ring.c                           |  2 +-
 app/test/test_reorder.c                            |  2 +-
 app/test/test_sched.c                              |  2 +-
 app/test/test_table.c                              |  2 +-
 drivers/net/bonding/rte_eth_bond_alb.c             |  2 +-
 examples/bond/main.c                               |  2 +-
 examples/distributor/main.c                        |  2 +-
 examples/dpdk_qat/main.c                           |  3 ++-
 examples/ethtool/ethtool-app/main.c                |  4 ++--
 examples/exception_path/main.c                     |  3 ++-
 examples/ip_fragmentation/main.c                   |  4 ++--
 examples/ipsec-secgw/ipsec-secgw.c                 |  2 +-
 examples/ipv4_multicast/main.c                     |  6 ++---
 examples/kni/main.c                                |  2 +-
 examples/l2fwd-cat/l2fwd-cat.c                     |  3 ++-
 examples/l2fwd-crypto/main.c                       |  2 +-
 examples/l2fwd-jobstats/main.c                     |  2 +-
 examples/l2fwd-keepalive/main.c                    |  2 +-
 examples/l2fwd/main.c                              |  2 +-
 examples/l3fwd-acl/main.c                          |  2 +-
 examples/l3fwd-power/main.c                        |  2 +-
 examples/l3fwd-vf/main.c                           |  2 +-
 examples/l3fwd/main.c                              |  3 ++-
 examples/link_status_interrupt/main.c              |  2 +-
 examples/load_balancer/init.c                      |  2 +-
 .../client_server_mp/mp_server/init.c              |  3 ++-
 examples/multi_process/symmetric_mp/main.c         |  2 +-
 examples/netmap_compat/bridge/bridge.c             |  2 +-
 examples/packet_ordering/main.c                    |  2 +-
 examples/performance-thread/l3fwd-thread/main.c    |  2 +-
 examples/ptpclient/ptpclient.c                     |  3 ++-
 examples/qos_meter/main.c                          |  2 +-
 examples/qos_sched/init.c                          |  2 +-
 examples/quota_watermark/qw/main.c                 |  2 +-
 examples/rxtx_callbacks/main.c                     |  2 +-
 examples/skeleton/basicfwd.c                       |  3 ++-
 examples/vhost/main.c                              |  2 +-
 examples/vhost_xen/main.c                          |  2 +-
 examples/vmdq/main.c                               |  2 +-
 examples/vmdq_dcb/main.c                           |  2 +-
 lib/librte_mbuf/rte_mbuf.c                         | 27 ++++++++++++++++++----
 lib/librte_mbuf/rte_mbuf.h                         | 15 ++++++++++++
 lib/librte_mbuf/rte_mbuf_version.map               |  7 ++++++
 55 files changed, 113 insertions(+), 62 deletions(-)

diff --git a/app/pdump/main.c b/app/pdump/main.c
index f3ef181..d3b83aa 100644
--- a/app/pdump/main.c
+++ b/app/pdump/main.c
@@ -641,7 +641,7 @@ create_mp_ring_vdev(void)
 					pt->total_num_mbufs,
 					MBUF_POOL_CACHE_SIZE, 0,
 					pt->mbuf_data_size,
-					rte_socket_id());
+					rte_socket_id(), NULL);
 			if (mbuf_pool == NULL) {
 				cleanup_rings();
 				rte_exit(EXIT_FAILURE,
diff --git a/app/test-pipeline/init.c b/app/test-pipeline/init.c
index aef082f..d2dd190 100644
--- a/app/test-pipeline/init.c
+++ b/app/test-pipeline/init.c
@@ -144,7 +144,8 @@ app_init_mbuf_pools(void)
 	/* Init the buffer pool */
 	RTE_LOG(INFO, USER1, "Creating the mbuf pool ...\n");
 	app.pool = rte_pktmbuf_pool_create("mempool", app.pool_size,
-		app.pool_cache_size, 0, app.pool_buffer_size, rte_socket_id());
+		app.pool_cache_size, 0, app.pool_buffer_size, rte_socket_id(),
+		NULL);
 	if (app.pool == NULL)
 		rte_panic("Cannot create mbuf pool\n");
 }
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 30749a4..cc3d2d0 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -453,7 +453,8 @@ mbuf_pool_create(uint16_t mbuf_seg_size, unsigned nb_mbuf,
 		} else {
 			/* wrapper to rte_mempool_create() */
 			rte_mp = rte_pktmbuf_pool_create(pool_name, nb_mbuf,
-				mb_mempool_cache, 0, mbuf_seg_size, socket_id);
+				mb_mempool_cache, 0, mbuf_seg_size, socket_id,
+				NULL);
 		}
 	}
 
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 647787d..b36116f 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -160,7 +160,7 @@ testsuite_setup(void)
 		ts_params->mbuf_pool = rte_pktmbuf_pool_create(
 				"CRYPTO_MBUFPOOL",
 				NUM_MBUFS, MBUF_CACHE_SIZE, 0, MBUF_SIZE,
-				rte_socket_id());
+				rte_socket_id(), NULL);
 		if (ts_params->mbuf_pool == NULL) {
 			RTE_LOG(ERR, USER1, "Can't create CRYPTO_MBUFPOOL\n");
 			return TEST_FAILED;
diff --git a/app/test/test_cryptodev_perf.c b/app/test/test_cryptodev_perf.c
index 2398d84..b77b09b 100644
--- a/app/test/test_cryptodev_perf.c
+++ b/app/test/test_cryptodev_perf.c
@@ -224,7 +224,7 @@ testsuite_setup(void)
 		ts_params->mbuf_mp = rte_pktmbuf_pool_create(
 				"CRYPTO_PERF_MBUFPOOL",
 				NUM_MBUFS, MBUF_CACHE_SIZE, 0, MBUF_SIZE,
-				rte_socket_id());
+				rte_socket_id(), NULL);
 		if (ts_params->mbuf_mp == NULL) {
 			RTE_LOG(ERR, USER1, "Can't create CRYPTO_PERF_MBUFPOOL\n");
 			return TEST_FAILED;
diff --git a/app/test/test_distributor.c b/app/test/test_distributor.c
index 85cb8f3..a2bacb1 100644
--- a/app/test/test_distributor.c
+++ b/app/test/test_distributor.c
@@ -529,7 +529,7 @@ test_distributor(void)
 			(BIG_BATCH * 2) - 1 : (511 * rte_lcore_count());
 	if (p == NULL) {
 		p = rte_pktmbuf_pool_create("DT_MBUF_POOL", nb_bufs, BURST,
-			0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
+			0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id(), NULL);
 		if (p == NULL) {
 			printf("Error creating mempool\n");
 			return -1;
diff --git a/app/test/test_distributor_perf.c b/app/test/test_distributor_perf.c
index 7947fe9..3a0755b 100644
--- a/app/test/test_distributor_perf.c
+++ b/app/test/test_distributor_perf.c
@@ -242,7 +242,7 @@ test_distributor_perf(void)
 			(BIG_BATCH * 2) - 1 : (511 * rte_lcore_count());
 	if (p == NULL) {
 		p = rte_pktmbuf_pool_create("DPT_MBUF_POOL", nb_bufs, BURST,
-			0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
+			0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id(), NULL);
 		if (p == NULL) {
 			printf("Error creating mempool\n");
 			return -1;
diff --git a/app/test/test_kni.c b/app/test/test_kni.c
index 309741c..6d25ffd 100644
--- a/app/test/test_kni.c
+++ b/app/test/test_kni.c
@@ -120,7 +120,7 @@ test_kni_create_mempool(void)
 		mp = rte_pktmbuf_pool_create("kni_mempool",
 				NB_MBUF,
 				MEMPOOL_CACHE_SZ, 0, MBUF_DATA_SZ,
-				SOCKET);
+				SOCKET, NULL);
 
 	return mp;
 }
diff --git a/app/test/test_link_bonding.c b/app/test/test_link_bonding.c
index 3229660..9f864a5 100644
--- a/app/test/test_link_bonding.c
+++ b/app/test/test_link_bonding.c
@@ -279,7 +279,7 @@ test_setup(void)
 	if (test_params->mbuf_pool == NULL) {
 		test_params->mbuf_pool = rte_pktmbuf_pool_create("MBUF_POOL",
 			nb_mbuf_per_pool, MBUF_CACHE_SIZE, 0,
-			RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
+			RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id(), NULL);
 		TEST_ASSERT_NOT_NULL(test_params->mbuf_pool,
 				"rte_mempool_create failed");
 	}
diff --git a/app/test/test_link_bonding_mode4.c b/app/test/test_link_bonding_mode4.c
index 53caa3e..8dc23c9 100644
--- a/app/test/test_link_bonding_mode4.c
+++ b/app/test/test_link_bonding_mode4.c
@@ -416,7 +416,7 @@ test_setup(void)
 					TEST_TX_DESC_MAX + MAX_PKT_BURST;
 		test_params.mbuf_pool = rte_pktmbuf_pool_create("TEST_MODE4",
 			nb_mbuf_per_pool, MBUF_CACHE_SIZE, 0,
-			RTE_MBUF_DEFAULT_BUF_SIZE, socket_id);
+			RTE_MBUF_DEFAULT_BUF_SIZE, socket_id, NULL);
 
 		TEST_ASSERT(test_params.mbuf_pool != NULL,
 			"rte_mempool_create failed\n");
diff --git a/app/test/test_mbuf.c b/app/test/test_mbuf.c
index c0823ea..016f621 100644
--- a/app/test/test_mbuf.c
+++ b/app/test/test_mbuf.c
@@ -801,7 +801,7 @@ test_refcnt_mbuf(void)
 			(refcnt_pool = rte_pktmbuf_pool_create(
 				MAKE_STRING(refcnt_pool),
 				REFCNT_MBUF_NUM, 0, 0, 0,
-				SOCKET_ID_ANY)) == NULL) {
+				SOCKET_ID_ANY, NULL)) == NULL) {
 		printf("%s: cannot allocate " MAKE_STRING(refcnt_pool) "\n",
 		    __func__);
 		return -1;
@@ -939,7 +939,7 @@ test_mbuf(void)
 	/* create pktmbuf pool if it does not exist */
 	if (pktmbuf_pool == NULL) {
 		pktmbuf_pool = rte_pktmbuf_pool_create("test_pktmbuf_pool",
-			NB_MBUF, 32, 0, MBUF_DATA_SIZE, SOCKET_ID_ANY);
+			NB_MBUF, 32, 0, MBUF_DATA_SIZE, SOCKET_ID_ANY, NULL);
 	}
 
 	if (pktmbuf_pool == NULL) {
@@ -951,7 +951,7 @@ test_mbuf(void)
 	 * room size */
 	if (pktmbuf_pool2 == NULL) {
 		pktmbuf_pool2 = rte_pktmbuf_pool_create("test_pktmbuf_pool2",
-			NB_MBUF, 32, MBUF2_PRIV_SIZE, 0, SOCKET_ID_ANY);
+			NB_MBUF, 32, MBUF2_PRIV_SIZE, 0, SOCKET_ID_ANY, NULL);
 	}
 
 	if (pktmbuf_pool2 == NULL) {
diff --git a/app/test/test_pmd_perf.c b/app/test/test_pmd_perf.c
index e055aa0..8b6b629 100644
--- a/app/test/test_pmd_perf.c
+++ b/app/test/test_pmd_perf.c
@@ -290,7 +290,8 @@ init_mbufpool(unsigned nb_mbuf)
 			mbufpool[socketid] =
 				rte_pktmbuf_pool_create(s, nb_mbuf,
 					MEMPOOL_CACHE_SIZE, 0,
-					RTE_MBUF_DEFAULT_BUF_SIZE, socketid);
+					RTE_MBUF_DEFAULT_BUF_SIZE, socketid,
+					NULL);
 			if (mbufpool[socketid] == NULL)
 				rte_exit(EXIT_FAILURE,
 					"Cannot init mbuf pool on socket %d\n",
diff --git a/app/test/test_pmd_ring.c b/app/test/test_pmd_ring.c
index 47374db..f92c1d4 100644
--- a/app/test/test_pmd_ring.c
+++ b/app/test/test_pmd_ring.c
@@ -464,7 +464,7 @@ test_pmd_ring(void)
 	}
 
 	mp = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF, 32,
-		0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
+		0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id(), NULL);
 	if (mp == NULL)
 		return -1;
 
diff --git a/app/test/test_reorder.c b/app/test/test_reorder.c
index e8a0a2f..79e2bb6 100644
--- a/app/test/test_reorder.c
+++ b/app/test/test_reorder.c
@@ -353,7 +353,7 @@ test_setup(void)
 	if (test_params->p == NULL) {
 		test_params->p = rte_pktmbuf_pool_create("RO_MBUF_POOL",
 			NUM_MBUFS, BURST, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
-			rte_socket_id());
+			rte_socket_id(), NULL);
 		if (test_params->p == NULL) {
 			printf("%s: Error creating mempool\n", __func__);
 			return -1;
diff --git a/app/test/test_sched.c b/app/test/test_sched.c
index 63ab084..8e237f9 100644
--- a/app/test/test_sched.c
+++ b/app/test/test_sched.c
@@ -99,7 +99,7 @@ create_mempool(void)
 	mp = rte_mempool_lookup("test_sched");
 	if (!mp)
 		mp = rte_pktmbuf_pool_create("test_sched", NB_MBUF,
-			MEMPOOL_CACHE_SZ, 0, MBUF_DATA_SZ, SOCKET);
+			MEMPOOL_CACHE_SZ, 0, MBUF_DATA_SZ, SOCKET, NULL);
 
 	return mp;
 }
diff --git a/app/test/test_table.c b/app/test/test_table.c
index 1faa0a6..f2a9976 100644
--- a/app/test/test_table.c
+++ b/app/test/test_table.c
@@ -93,7 +93,7 @@ app_init_mbuf_pools(void)
 			"mempool",
 			POOL_SIZE,
 			POOL_CACHE_SIZE, 0, POOL_BUFFER_SIZE,
-			0);
+			0, NULL);
 		if (pool == NULL)
 			rte_panic("Cannot create mbuf pool\n");
 	}
diff --git a/drivers/net/bonding/rte_eth_bond_alb.c b/drivers/net/bonding/rte_eth_bond_alb.c
index 38f5c4d..e11a3bd 100644
--- a/drivers/net/bonding/rte_eth_bond_alb.c
+++ b/drivers/net/bonding/rte_eth_bond_alb.c
@@ -85,7 +85,7 @@ bond_mode_alb_enable(struct rte_eth_dev *bond_dev)
 			512 * RTE_MAX_ETHPORTS,
 			RTE_MEMPOOL_CACHE_MAX_SIZE >= 32 ?
 				32 : RTE_MEMPOOL_CACHE_MAX_SIZE,
-			0, data_size, socket_id);
+			0, data_size, socket_id, NULL);
 
 		if (internals->mode6.mempool == NULL) {
 			RTE_LOG(ERR, PMD, "%s: Failed to initialize ALB mempool.\n",
diff --git a/examples/bond/main.c b/examples/bond/main.c
index 6402c6b..9061092 100644
--- a/examples/bond/main.c
+++ b/examples/bond/main.c
@@ -746,7 +746,7 @@ main(int argc, char *argv[])
 		rte_exit(EXIT_FAILURE, "You can have max 4 ports\n");
 
 	mbuf_pool = rte_pktmbuf_pool_create("MBUF_POOL", NB_MBUF, 32,
-		0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
+		0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id(), NULL);
 	if (mbuf_pool == NULL)
 		rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");
 
diff --git a/examples/distributor/main.c b/examples/distributor/main.c
index 537cee1..ed78dd8 100644
--- a/examples/distributor/main.c
+++ b/examples/distributor/main.c
@@ -534,7 +534,7 @@ main(int argc, char *argv[])
 
 	mbuf_pool = rte_pktmbuf_pool_create("MBUF_POOL",
 		NUM_MBUFS * nb_ports, MBUF_CACHE_SIZE, 0,
-		RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
+		RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id(), NULL);
 	if (mbuf_pool == NULL)
 		rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");
 	nb_ports_available = nb_ports;
diff --git a/examples/dpdk_qat/main.c b/examples/dpdk_qat/main.c
index aa9b1d5..d18076f 100644
--- a/examples/dpdk_qat/main.c
+++ b/examples/dpdk_qat/main.c
@@ -612,7 +612,8 @@ init_mem(void)
 			snprintf(s, sizeof(s), "mbuf_pool_%d", socketid);
 			pktmbuf_pool[socketid] =
 				rte_pktmbuf_pool_create(s, NB_MBUF, 32, 0,
-					RTE_MBUF_DEFAULT_BUF_SIZE, socketid);
+					RTE_MBUF_DEFAULT_BUF_SIZE, socketid,
+					NULL);
 			if (pktmbuf_pool[socketid] == NULL) {
 				printf("Cannot init mbuf pool on socket %d\n", socketid);
 				return -1;
diff --git a/examples/ethtool/ethtool-app/main.c b/examples/ethtool/ethtool-app/main.c
index 2c655d8..6f64729 100644
--- a/examples/ethtool/ethtool-app/main.c
+++ b/examples/ethtool/ethtool-app/main.c
@@ -139,8 +139,8 @@ static void setup_ports(struct app_config *app_cfg, int cnt_ports)
 			size_pktpool, PKTPOOL_CACHE,
 			0,
 			RTE_MBUF_DEFAULT_BUF_SIZE,
-			rte_socket_id()
-			);
+			rte_socket_id(),
+			NULL);
 		if (ptr_port->pkt_pool == NULL)
 			rte_exit(EXIT_FAILURE,
 				"rte_pktmbuf_pool_create failed"
diff --git a/examples/exception_path/main.c b/examples/exception_path/main.c
index 73d50b6..c53f034 100644
--- a/examples/exception_path/main.c
+++ b/examples/exception_path/main.c
@@ -530,7 +530,8 @@ main(int argc, char** argv)
 
 	/* Create the mbuf pool */
 	pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF,
-			MEMPOOL_CACHE_SZ, 0, MBUF_DATA_SZ, rte_socket_id());
+			MEMPOOL_CACHE_SZ, 0, MBUF_DATA_SZ, rte_socket_id(),
+			NULL);
 	if (pktmbuf_pool == NULL) {
 		FATAL_ERROR("Could not initialise mbuf pool");
 		return -1;
diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index e1e32c6..657c9da 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -742,7 +742,7 @@ init_mem(void)
 			snprintf(buf, sizeof(buf), "pool_direct_%i", socket);
 
 			mp = rte_pktmbuf_pool_create(buf, NB_MBUF, 32,
-				0, RTE_MBUF_DEFAULT_BUF_SIZE, socket);
+				0, RTE_MBUF_DEFAULT_BUF_SIZE, socket, NULL);
 			if (mp == NULL) {
 				RTE_LOG(ERR, IP_FRAG, "Cannot create direct mempool\n");
 				return -1;
@@ -756,7 +756,7 @@ init_mem(void)
 			snprintf(buf, sizeof(buf), "pool_indirect_%i", socket);
 
 			mp = rte_pktmbuf_pool_create(buf, NB_MBUF, 32, 0, 0,
-				socket);
+				socket, NULL);
 			if (mp == NULL) {
 				RTE_LOG(ERR, IP_FRAG, "Cannot create indirect mempool\n");
 				return -1;
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index 5d04eb3..462939e 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -1384,7 +1384,7 @@ pool_init(struct socket_ctx *ctx, int32_t socket_id, uint32_t nb_mbuf)
 	ctx->mbuf_pool = rte_pktmbuf_pool_create(s, nb_mbuf,
 			MEMPOOL_CACHE_SIZE, ipsec_metadata_size(),
 			RTE_MBUF_DEFAULT_BUF_SIZE,
-			socket_id);
+			socket_id, NULL);
 	if (ctx->mbuf_pool == NULL)
 		rte_exit(EXIT_FAILURE, "Cannot init mbuf pool on socket %d\n",
 				socket_id);
diff --git a/examples/ipv4_multicast/main.c b/examples/ipv4_multicast/main.c
index 708d76e..f28c55d 100644
--- a/examples/ipv4_multicast/main.c
+++ b/examples/ipv4_multicast/main.c
@@ -690,19 +690,19 @@ main(int argc, char **argv)
 
 	/* create the mbuf pools */
 	packet_pool = rte_pktmbuf_pool_create("packet_pool", NB_PKT_MBUF, 32,
-		0, PKT_MBUF_DATA_SIZE, rte_socket_id());
+		0, PKT_MBUF_DATA_SIZE, rte_socket_id(), NULL);
 
 	if (packet_pool == NULL)
 		rte_exit(EXIT_FAILURE, "Cannot init packet mbuf pool\n");
 
 	header_pool = rte_pktmbuf_pool_create("header_pool", NB_HDR_MBUF, 32,
-		0, HDR_MBUF_DATA_SIZE, rte_socket_id());
+		0, HDR_MBUF_DATA_SIZE, rte_socket_id(), NULL);
 
 	if (header_pool == NULL)
 		rte_exit(EXIT_FAILURE, "Cannot init header mbuf pool\n");
 
 	clone_pool = rte_pktmbuf_pool_create("clone_pool", NB_CLONE_MBUF, 32,
-		0, 0, rte_socket_id());
+		0, 0, rte_socket_id(), NULL);
 
 	if (clone_pool == NULL)
 		rte_exit(EXIT_FAILURE, "Cannot init clone mbuf pool\n");
diff --git a/examples/kni/main.c b/examples/kni/main.c
index 57313d1..edae0bd 100644
--- a/examples/kni/main.c
+++ b/examples/kni/main.c
@@ -865,7 +865,7 @@ main(int argc, char** argv)
 
 	/* Create the mbuf pool */
 	pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF,
-		MEMPOOL_CACHE_SZ, 0, MBUF_DATA_SZ, rte_socket_id());
+		MEMPOOL_CACHE_SZ, 0, MBUF_DATA_SZ, rte_socket_id(), NULL);
 	if (pktmbuf_pool == NULL) {
 		rte_exit(EXIT_FAILURE, "Could not initialise mbuf pool\n");
 		return -1;
diff --git a/examples/l2fwd-cat/l2fwd-cat.c b/examples/l2fwd-cat/l2fwd-cat.c
index 8cce33b..bc2bf9b 100644
--- a/examples/l2fwd-cat/l2fwd-cat.c
+++ b/examples/l2fwd-cat/l2fwd-cat.c
@@ -203,7 +203,8 @@ main(int argc, char *argv[])
 
 	/* Creates a new mempool in memory to hold the mbufs. */
 	mbuf_pool = rte_pktmbuf_pool_create("MBUF_POOL", NUM_MBUFS * nb_ports,
-		MBUF_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
+		MBUF_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id(),
+		NULL);
 
 	if (mbuf_pool == NULL)
 		rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");
diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
index ffce5f3..05cd7f3 100644
--- a/examples/l2fwd-crypto/main.c
+++ b/examples/l2fwd-crypto/main.c
@@ -1972,7 +1972,7 @@ main(int argc, char **argv)
 	/* create the mbuf pool */
 	l2fwd_pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF, 512,
 			sizeof(struct rte_crypto_op),
-			RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
+			RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id(), NULL);
 	if (l2fwd_pktmbuf_pool == NULL)
 		rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");
 
diff --git a/examples/l2fwd-jobstats/main.c b/examples/l2fwd-jobstats/main.c
index dd9201b..ff24c65 100644
--- a/examples/l2fwd-jobstats/main.c
+++ b/examples/l2fwd-jobstats/main.c
@@ -802,7 +802,7 @@ main(int argc, char **argv)
 	/* create the mbuf pool */
 	l2fwd_pktmbuf_pool =
 		rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF, 32,
-			0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
+			0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id(), NULL);
 	if (l2fwd_pktmbuf_pool == NULL)
 		rte_exit(EXIT_FAILURE, "Cannot init mbuf pool\n");
 
diff --git a/examples/l2fwd-keepalive/main.c b/examples/l2fwd-keepalive/main.c
index 60cccdb..8ab99e5 100644
--- a/examples/l2fwd-keepalive/main.c
+++ b/examples/l2fwd-keepalive/main.c
@@ -571,7 +571,7 @@ main(int argc, char **argv)
 
 	/* create the mbuf pool */
 	l2fwd_pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF, 32,
-		0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
+		0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id(), NULL);
 	if (l2fwd_pktmbuf_pool == NULL)
 		rte_exit(EXIT_FAILURE, "Cannot init mbuf pool\n");
 
diff --git a/examples/l2fwd/main.c b/examples/l2fwd/main.c
index 3827aa4..41ac1e1 100644
--- a/examples/l2fwd/main.c
+++ b/examples/l2fwd/main.c
@@ -547,7 +547,7 @@ main(int argc, char **argv)
 	/* create the mbuf pool */
 	l2fwd_pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF,
 		MEMPOOL_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
-		rte_socket_id());
+		rte_socket_id(), NULL);
 	if (l2fwd_pktmbuf_pool == NULL)
 		rte_exit(EXIT_FAILURE, "Cannot init mbuf pool\n");
 
diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
index 3cfbb40..080f3dd 100644
--- a/examples/l3fwd-acl/main.c
+++ b/examples/l3fwd-acl/main.c
@@ -1815,7 +1815,7 @@ init_mem(unsigned nb_mbuf)
 				rte_pktmbuf_pool_create(s, nb_mbuf,
 					MEMPOOL_CACHE_SIZE, 0,
 					RTE_MBUF_DEFAULT_BUF_SIZE,
-					socketid);
+					socketid, NULL);
 			if (pktmbuf_pool[socketid] == NULL)
 				rte_exit(EXIT_FAILURE,
 					"Cannot init mbuf pool on socket %d\n",
diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index b65d683..e3f98ed 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -1452,7 +1452,7 @@ init_mem(unsigned nb_mbuf)
 				rte_pktmbuf_pool_create(s, nb_mbuf,
 					MEMPOOL_CACHE_SIZE, 0,
 					RTE_MBUF_DEFAULT_BUF_SIZE,
-					socketid);
+					socketid, NULL);
 			if (pktmbuf_pool[socketid] == NULL)
 				rte_exit(EXIT_FAILURE,
 					"Cannot init mbuf pool on socket %d\n",
diff --git a/examples/l3fwd-vf/main.c b/examples/l3fwd-vf/main.c
index f56e8db..f4ff35a 100644
--- a/examples/l3fwd-vf/main.c
+++ b/examples/l3fwd-vf/main.c
@@ -928,7 +928,7 @@ init_mem(unsigned nb_mbuf)
 			snprintf(s, sizeof(s), "mbuf_pool_%d", socketid);
 			pktmbuf_pool[socketid] = rte_pktmbuf_pool_create(s,
 				nb_mbuf, MEMPOOL_CACHE_SIZE, 0,
-				RTE_MBUF_DEFAULT_BUF_SIZE, socketid);
+				RTE_MBUF_DEFAULT_BUF_SIZE, socketid, NULL);
 			if (pktmbuf_pool[socketid] == NULL)
 				rte_exit(EXIT_FAILURE, "Cannot init mbuf pool on socket %d\n", socketid);
 			else
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 7223e77..328bae2 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -723,7 +723,8 @@ init_mem(unsigned nb_mbuf)
 			pktmbuf_pool[socketid] =
 				rte_pktmbuf_pool_create(s, nb_mbuf,
 					MEMPOOL_CACHE_SIZE, 0,
-					RTE_MBUF_DEFAULT_BUF_SIZE, socketid);
+					RTE_MBUF_DEFAULT_BUF_SIZE, socketid,
+					NULL);
 			if (pktmbuf_pool[socketid] == NULL)
 				rte_exit(EXIT_FAILURE,
 					"Cannot init mbuf pool on socket %d\n",
diff --git a/examples/link_status_interrupt/main.c b/examples/link_status_interrupt/main.c
index 14a038b..28da420 100644
--- a/examples/link_status_interrupt/main.c
+++ b/examples/link_status_interrupt/main.c
@@ -571,7 +571,7 @@ main(int argc, char **argv)
 	/* create the mbuf pool */
 	lsi_pktmbuf_pool =
 		rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF, 32, 0,
-			RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
+			RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id(), NULL);
 	if (lsi_pktmbuf_pool == NULL)
 		rte_panic("Cannot init mbuf pool\n");
 
diff --git a/examples/load_balancer/init.c b/examples/load_balancer/init.c
index e07850b..b541625 100644
--- a/examples/load_balancer/init.c
+++ b/examples/load_balancer/init.c
@@ -130,7 +130,7 @@ app_init_mbuf_pools(void)
 		app.pools[socket] = rte_pktmbuf_pool_create(
 			name, APP_DEFAULT_MEMPOOL_BUFFERS,
 			APP_DEFAULT_MEMPOOL_CACHE_SIZE,
-			0, APP_DEFAULT_MBUF_DATA_SIZE, socket);
+			0, APP_DEFAULT_MBUF_DATA_SIZE, socket, NULL);
 		if (app.pools[socket] == NULL) {
 			rte_panic("Cannot create mbuf pool on socket %u\n", socket);
 		}
diff --git a/examples/multi_process/client_server_mp/mp_server/init.c b/examples/multi_process/client_server_mp/mp_server/init.c
index ad941a7..0470333 100644
--- a/examples/multi_process/client_server_mp/mp_server/init.c
+++ b/examples/multi_process/client_server_mp/mp_server/init.c
@@ -101,7 +101,8 @@ init_mbuf_pools(void)
 	printf("Creating mbuf pool '%s' [%u mbufs] ...\n",
 			PKTMBUF_POOL_NAME, num_mbufs);
 	pktmbuf_pool = rte_pktmbuf_pool_create(PKTMBUF_POOL_NAME, num_mbufs,
-		MBUF_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
+		MBUF_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id(),
+		NULL);
 
 	return pktmbuf_pool == NULL; /* 0  on success */
 }
diff --git a/examples/multi_process/symmetric_mp/main.c b/examples/multi_process/symmetric_mp/main.c
index d30ff4a..56479dd 100644
--- a/examples/multi_process/symmetric_mp/main.c
+++ b/examples/multi_process/symmetric_mp/main.c
@@ -446,7 +446,7 @@ main(int argc, char **argv)
 			rte_mempool_lookup(_SMP_MBUF_POOL) :
 			rte_pktmbuf_pool_create(_SMP_MBUF_POOL, NB_MBUFS,
 				MBUF_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
-				rte_socket_id());
+				rte_socket_id(), NULL);
 	if (mp == NULL)
 		rte_exit(EXIT_FAILURE, "Cannot get memory pool for buffers\n");
 
diff --git a/examples/netmap_compat/bridge/bridge.c b/examples/netmap_compat/bridge/bridge.c
index 53f5fdb..b432100 100644
--- a/examples/netmap_compat/bridge/bridge.c
+++ b/examples/netmap_compat/bridge/bridge.c
@@ -272,7 +272,7 @@ int main(int argc, char *argv[])
 		rte_exit(EXIT_FAILURE, "Not enough ethernet ports available\n");
 
 	pool = rte_pktmbuf_pool_create("mbuf_pool", MBUF_PER_POOL, 32, 0,
-		MBUF_DATA_SIZE, rte_socket_id());
+		MBUF_DATA_SIZE, rte_socket_id(), NULL);
 	if (pool == NULL)
 		rte_exit(EXIT_FAILURE, "Couldn't create mempool\n");
 
diff --git a/examples/packet_ordering/main.c b/examples/packet_ordering/main.c
index 3c88b86..7657aaa 100644
--- a/examples/packet_ordering/main.c
+++ b/examples/packet_ordering/main.c
@@ -665,7 +665,7 @@ main(int argc, char **argv)
 
 	mbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", MBUF_PER_POOL,
 			MBUF_POOL_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
-			rte_socket_id());
+			rte_socket_id(), NULL);
 	if (mbuf_pool == NULL)
 		rte_exit(EXIT_FAILURE, "%s\n", rte_strerror(rte_errno));
 
diff --git a/examples/performance-thread/l3fwd-thread/main.c b/examples/performance-thread/l3fwd-thread/main.c
index fdc90b2..93fd866 100644
--- a/examples/performance-thread/l3fwd-thread/main.c
+++ b/examples/performance-thread/l3fwd-thread/main.c
@@ -3359,7 +3359,7 @@ init_mem(unsigned nb_mbuf)
 			pktmbuf_pool[socketid] =
 				rte_pktmbuf_pool_create(s, nb_mbuf,
 					MEMPOOL_CACHE_SIZE, 0,
-					RTE_MBUF_DEFAULT_BUF_SIZE, socketid);
+					RTE_MBUF_DEFAULT_BUF_SIZE, socketid, NULL);
 			if (pktmbuf_pool[socketid] == NULL)
 				rte_exit(EXIT_FAILURE,
 						"Cannot init mbuf pool on socket %d\n", socketid);
diff --git a/examples/ptpclient/ptpclient.c b/examples/ptpclient/ptpclient.c
index 0af4f3b..25603dc 100644
--- a/examples/ptpclient/ptpclient.c
+++ b/examples/ptpclient/ptpclient.c
@@ -744,7 +744,8 @@ main(int argc, char *argv[])
 
 	/* Creates a new mempool in memory to hold the mbufs. */
 	mbuf_pool = rte_pktmbuf_pool_create("MBUF_POOL", NUM_MBUFS * nb_ports,
-		MBUF_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
+		MBUF_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id(),
+		NULL);
 
 	if (mbuf_pool == NULL)
 		rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");
diff --git a/examples/qos_meter/main.c b/examples/qos_meter/main.c
index 1565615..ec4a7d4 100644
--- a/examples/qos_meter/main.c
+++ b/examples/qos_meter/main.c
@@ -328,7 +328,7 @@ main(int argc, char **argv)
 
 	/* Buffer pool init */
 	pool = rte_pktmbuf_pool_create("pool", NB_MBUF, MEMPOOL_CACHE_SIZE,
-		0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
+		0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id(), NULL);
 	if (pool == NULL)
 		rte_exit(EXIT_FAILURE, "Buffer pool creation error\n");
 
diff --git a/examples/qos_sched/init.c b/examples/qos_sched/init.c
index 70e12bb..2b89470 100644
--- a/examples/qos_sched/init.c
+++ b/examples/qos_sched/init.c
@@ -338,7 +338,7 @@ int app_init(void)
 		qos_conf[i].mbuf_pool = rte_pktmbuf_pool_create(pool_name,
 			mp_size, burst_conf.rx_burst * 4, 0,
 			RTE_MBUF_DEFAULT_BUF_SIZE,
-			rte_eth_dev_socket_id(qos_conf[i].rx_port));
+			rte_eth_dev_socket_id(qos_conf[i].rx_port), NULL);
 		if (qos_conf[i].mbuf_pool == NULL)
 			rte_exit(EXIT_FAILURE, "Cannot init mbuf pool for socket %u\n", i);
 
diff --git a/examples/quota_watermark/qw/main.c b/examples/quota_watermark/qw/main.c
index 8ed0214..6031d50 100644
--- a/examples/quota_watermark/qw/main.c
+++ b/examples/quota_watermark/qw/main.c
@@ -336,7 +336,7 @@ main(int argc, char **argv)
 
     /* Create a pool of mbuf to store packets */
     mbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", MBUF_PER_POOL, 32, 0,
-	    MBUF_DATA_SIZE, rte_socket_id());
+	    MBUF_DATA_SIZE, rte_socket_id(), NULL);
     if (mbuf_pool == NULL)
         rte_panic("%s\n", rte_strerror(rte_errno));
 
diff --git a/examples/rxtx_callbacks/main.c b/examples/rxtx_callbacks/main.c
index 048b23f..22c280c 100644
--- a/examples/rxtx_callbacks/main.c
+++ b/examples/rxtx_callbacks/main.c
@@ -205,7 +205,7 @@ main(int argc, char *argv[])
 
 	mbuf_pool = rte_pktmbuf_pool_create("MBUF_POOL",
 		NUM_MBUFS * nb_ports, MBUF_CACHE_SIZE, 0,
-		RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
+		RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id(), NULL);
 	if (mbuf_pool == NULL)
 		rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");
 
diff --git a/examples/skeleton/basicfwd.c b/examples/skeleton/basicfwd.c
index c89822c..2acedac 100644
--- a/examples/skeleton/basicfwd.c
+++ b/examples/skeleton/basicfwd.c
@@ -190,7 +190,8 @@ main(int argc, char *argv[])
 
 	/* Creates a new mempool in memory to hold the mbufs. */
 	mbuf_pool = rte_pktmbuf_pool_create("MBUF_POOL", NUM_MBUFS * nb_ports,
-		MBUF_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
+		MBUF_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id(),
+		NULL);
 
 	if (mbuf_pool == NULL)
 		rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index 92a9823..20066e0 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -1397,7 +1397,7 @@ create_mbuf_pool(uint16_t nr_port, uint32_t nr_switch_core, uint32_t mbuf_size,
 
 	mbuf_pool = rte_pktmbuf_pool_create("MBUF_POOL", nr_mbufs,
 					    nr_mbuf_cache, 0, mbuf_size,
-					    rte_socket_id());
+					    rte_socket_id(), NULL);
 	if (mbuf_pool == NULL)
 		rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");
 }
diff --git a/examples/vhost_xen/main.c b/examples/vhost_xen/main.c
index 2e40357..af2604f 100644
--- a/examples/vhost_xen/main.c
+++ b/examples/vhost_xen/main.c
@@ -1461,7 +1461,7 @@ main(int argc, char *argv[])
 	/* Create the mbuf pool. */
 	mbuf_pool = rte_pktmbuf_pool_create("MBUF_POOL",
 		NUM_MBUFS_PER_PORT * valid_num_ports, MBUF_CACHE_SIZE, 0,
-		RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
+		RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id(), NULL);
 	if (mbuf_pool == NULL)
 		rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");
 
diff --git a/examples/vmdq/main.c b/examples/vmdq/main.c
index f639355..28ee7e2 100644
--- a/examples/vmdq/main.c
+++ b/examples/vmdq/main.c
@@ -612,7 +612,7 @@ main(int argc, char *argv[])
 
 	mbuf_pool = rte_pktmbuf_pool_create("MBUF_POOL",
 		NUM_MBUFS_PER_PORT * nb_ports, MBUF_CACHE_SIZE,
-		0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
+		0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id(), NULL);
 	if (mbuf_pool == NULL)
 		rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");
 
diff --git a/examples/vmdq_dcb/main.c b/examples/vmdq_dcb/main.c
index 35ffffa..91d1d90 100644
--- a/examples/vmdq_dcb/main.c
+++ b/examples/vmdq_dcb/main.c
@@ -675,7 +675,7 @@ main(int argc, char *argv[])
 
 	mbuf_pool = rte_pktmbuf_pool_create("MBUF_POOL",
 		NUM_MBUFS_PER_PORT * nb_ports, MBUF_CACHE_SIZE,
-		0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
+		0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id(), NULL);
 	if (mbuf_pool == NULL)
 		rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");
 
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index 72f9280..3e9cbb6 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -58,6 +58,7 @@
 #include <rte_string_fns.h>
 #include <rte_hexdump.h>
 #include <rte_errno.h>
+#include <rte_compat.h>
 
 /*
  * ctrlmbuf constructor, given as a callback function to
@@ -148,9 +149,9 @@ rte_pktmbuf_init(struct rte_mempool *mp,
 
 /* helper to create a mbuf pool */
 struct rte_mempool *
-rte_pktmbuf_pool_create(const char *name, unsigned n,
+rte_pktmbuf_pool_create_v1611(const char *name, unsigned n,
 	unsigned cache_size, uint16_t priv_size, uint16_t data_room_size,
-	int socket_id)
+	int socket_id, const char *mempool_ops)
 {
 	struct rte_mempool *mp;
 	struct rte_pktmbuf_pool_private mbp_priv;
@@ -173,8 +174,10 @@ rte_pktmbuf_pool_create(const char *name, unsigned n,
 	if (mp == NULL)
 		return NULL;
 
-	ret = rte_mempool_set_ops_byname(mp,
-		RTE_MBUF_DEFAULT_MEMPOOL_OPS, NULL);
+	if (mempool_ops == NULL)
+		mempool_ops = RTE_MBUF_DEFAULT_MEMPOOL_OPS;
+
+	ret = rte_mempool_set_ops_byname(mp, mempool_ops, NULL);
 	if (ret != 0) {
 		RTE_LOG(ERR, MBUF, "error setting mempool handler\n");
 		rte_mempool_free(mp);
@@ -194,6 +197,22 @@ rte_pktmbuf_pool_create(const char *name, unsigned n,
 
 	return mp;
 }
+VERSION_SYMBOL(rte_pktmbuf_pool_create, _v21, 2.1);
+
+struct rte_mempool *
+rte_pktmbuf_pool_create_v21(const char *name, unsigned n,
+	unsigned cache_size, uint16_t priv_size, uint16_t data_room_size,
+	int socket_id)
+{
+	return rte_pktmbuf_pool_create_v1611(name, n, cache_size, priv_size,
+		data_room_size, socket_id, NULL);
+}
+BIND_DEFAULT_SYMBOL(rte_pktmbuf_pool_create, _v1611, 16.11);
+MAP_STATIC_SYMBOL(struct rte_mempool * rte_pktmbuf_pool_create(const char *name,
+		unsigned n, unsigned cache_size, uint16_t priv_size,
+		uint16_t data_room_size, int socket_id,
+		const char *mempool_ops),
+	rte_pktmbuf_pool_create_v1611);
 
 /* do some sanity checks on a mbuf: panic if it fails */
 void
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 23b7bf8..774e071 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -1330,6 +1330,10 @@ void rte_pktmbuf_pool_init(struct rte_mempool *mp, void *opaque_arg);
  *   The socket identifier where the memory should be allocated. The
  *   value can be *SOCKET_ID_ANY* if there is no NUMA constraint for the
  *   reserved zone.
+ * @param mempool_ops
+ *   String identifying the mempool operations to be used. If set
+ *   to NULL, use the default mempool operations defined at compile-time
+ *   (RTE_MBUF_DEFAULT_MEMPOOL_OPS).
  * @return
  *   The pointer to the new allocated mempool, on success. NULL on error
  *   with rte_errno set appropriately. Possible rte_errno values include:
@@ -1343,8 +1347,19 @@ void rte_pktmbuf_pool_init(struct rte_mempool *mp, void *opaque_arg);
 struct rte_mempool *
 rte_pktmbuf_pool_create(const char *name, unsigned n,
 	unsigned cache_size, uint16_t priv_size, uint16_t data_room_size,
+	int socket_id, const char *mempool_ops);
+
+struct rte_mempool *
+rte_pktmbuf_pool_create_v21(const char *name, unsigned n,
+	unsigned cache_size, uint16_t priv_size, uint16_t data_room_size,
 	int socket_id);
 
+struct rte_mempool *
+rte_pktmbuf_pool_create_v1611(const char *name, unsigned n,
+	unsigned cache_size, uint16_t priv_size, uint16_t data_room_size,
+	int socket_id, const char *mempool_ops);
+
+
 /**
  * Get the data room size of mbufs stored in a pktmbuf_pool
  *
diff --git a/lib/librte_mbuf/rte_mbuf_version.map b/lib/librte_mbuf/rte_mbuf_version.map
index e10f6bd..606509d 100644
--- a/lib/librte_mbuf/rte_mbuf_version.map
+++ b/lib/librte_mbuf/rte_mbuf_version.map
@@ -18,3 +18,10 @@ DPDK_2.1 {
 	rte_pktmbuf_pool_create;
 
 } DPDK_2.0;
+
+DPDK_16.11 {
+	global:
+
+	rte_pktmbuf_pool_create;
+
+} DPDK_2.1;
-- 
2.8.1


* [RFC 2/7] mbuf: use helper to create the pool
  2016-09-19 13:42 [RFC 0/7] changing mbuf pool handler Olivier Matz
  2016-09-19 13:42 ` [RFC 1/7] mbuf: set the handler at mbuf pool creation Olivier Matz
@ 2016-09-19 13:42 ` Olivier Matz
  2017-01-16 15:30   ` Santosh Shukla
  2016-09-19 13:42 ` [RFC 3/7] testpmd: new parameter to set mbuf pool ops Olivier Matz
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 15+ messages in thread
From: Olivier Matz @ 2016-09-19 13:42 UTC (permalink / raw)
  To: dev; +Cc: jerin.jacob, hemant.agrawal, david.hunt

When possible, replace the uses of rte_mempool_create() with
the helper provided in librte_mbuf: rte_pktmbuf_pool_create().

This is the preferred way to create a mbuf pool.

Also update the documentation accordingly.
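
A typical conversion looks roughly as follows (sketch only; NB_MBUF and
MBUF_SIZE are the constants commonly found in the examples):

/* before: legacy API, explicit element size and init callbacks */
mp = rte_mempool_create("mbuf_pool", NB_MBUF, MBUF_SIZE, 32,
    sizeof(struct rte_pktmbuf_pool_private),
    rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL,
    rte_socket_id(), 0);

/* after: the helper computes the element size and sets the default
 * callbacks itself; only the data room size is passed */
mp = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF, 32, 0,
    RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id(), NULL);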

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
 app/test/test_link_bonding_rssconf.c               | 11 ++++----
 doc/guides/prog_guide/mbuf_lib.rst                 |  2 +-
 doc/guides/sample_app_ug/ip_reassembly.rst         | 13 +++++----
 doc/guides/sample_app_ug/ipv4_multicast.rst        | 12 ++++----
 doc/guides/sample_app_ug/l2_forward_job_stats.rst  | 33 ++++++++--------------
 .../sample_app_ug/l2_forward_real_virtual.rst      | 26 +++++++----------
 doc/guides/sample_app_ug/ptpclient.rst             | 12 ++------
 doc/guides/sample_app_ug/quota_watermark.rst       | 26 ++++++-----------
 drivers/net/bonding/rte_eth_bond_8023ad.c          | 13 ++++-----
 examples/ip_pipeline/init.c                        | 19 ++++++-------
 examples/ip_reassembly/main.c                      | 16 +++++------
 examples/multi_process/l2fwd_fork/main.c           | 14 ++++-----
 examples/tep_termination/main.c                    | 17 ++++++-----
 lib/librte_mbuf/rte_mbuf.c                         |  7 +++--
 lib/librte_mbuf/rte_mbuf.h                         | 29 +++++++++++--------
 15 files changed, 111 insertions(+), 139 deletions(-)

diff --git a/app/test/test_link_bonding_rssconf.c b/app/test/test_link_bonding_rssconf.c
index 34f1c16..dd1bcc7 100644
--- a/app/test/test_link_bonding_rssconf.c
+++ b/app/test/test_link_bonding_rssconf.c
@@ -67,7 +67,7 @@
 #define SLAVE_RXTX_QUEUE_FMT      ("rssconf_slave%d_q%d")
 
 #define NUM_MBUFS 8191
-#define MBUF_SIZE (1600 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
+#define MBUF_SIZE (1600 + RTE_PKTMBUF_HEADROOM)
 #define MBUF_CACHE_SIZE 250
 #define BURST_SIZE 32
 
@@ -536,13 +536,12 @@ test_setup(void)
 
 	if (test_params.mbuf_pool == NULL) {
 
-		test_params.mbuf_pool = rte_mempool_create("RSS_MBUF_POOL", NUM_MBUFS *
-				SLAVE_COUNT, MBUF_SIZE, MBUF_CACHE_SIZE,
-				sizeof(struct rte_pktmbuf_pool_private), rte_pktmbuf_pool_init,
-				NULL, rte_pktmbuf_init, NULL, rte_socket_id(), 0);
+		test_params.mbuf_pool = rte_pktmbuf_pool_create(
+			"RSS_MBUF_POOL", NUM_MBUFS * SLAVE_COUNT,
+			MBUF_CACHE_SIZE, 0, MBUF_SIZE, rte_socket_id(), NULL);
 
 		TEST_ASSERT(test_params.mbuf_pool != NULL,
-				"rte_mempool_create failed\n");
+				"rte_pktmbuf_pool_create failed\n");
 	}
 
 	/* Create / initialize ring eth devs. */
diff --git a/doc/guides/prog_guide/mbuf_lib.rst b/doc/guides/prog_guide/mbuf_lib.rst
index 8e61682..b366e04 100644
--- a/doc/guides/prog_guide/mbuf_lib.rst
+++ b/doc/guides/prog_guide/mbuf_lib.rst
@@ -103,7 +103,7 @@ Constructors
 Packet and control mbuf constructors are provided by the API.
 The rte_pktmbuf_init() and rte_ctrlmbuf_init() functions initialize some fields in the mbuf structure that
 are not modified by the user once created (mbuf type, origin pool, buffer start address, and so on).
-This function is given as a callback function to the rte_mempool_create() function at pool creation time.
+This function is given as a callback function to the rte_pktmbuf_pool_create() or the rte_mempool_create() function at pool creation time.
 
 Allocating and Freeing mbufs
 ----------------------------
diff --git a/doc/guides/sample_app_ug/ip_reassembly.rst b/doc/guides/sample_app_ug/ip_reassembly.rst
index 3c5cc70..4b6023a 100644
--- a/doc/guides/sample_app_ug/ip_reassembly.rst
+++ b/doc/guides/sample_app_ug/ip_reassembly.rst
@@ -223,11 +223,14 @@ each RX queue uses its own mempool.
 
     snprintf(buf, sizeof(buf), "mbuf_pool_%u_%u", lcore, queue);
 
-    if ((rxq->pool = rte_mempool_create(buf, nb_mbuf, MBUF_SIZE, 0, sizeof(struct rte_pktmbuf_pool_private), rte_pktmbuf_pool_init, NULL,
-        rte_pktmbuf_init, NULL, socket, MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET)) == NULL) {
-
-            RTE_LOG(ERR, IP_RSMBL, "mempool_create(%s) failed", buf);
-            return -1;
+    rxq->pool = rte_pktmbuf_pool_create(buf, nb_mbuf,
+    	0, /* cache size */
+    	0, /* priv size */
+    	MBUF_DATA_SIZE, socket, "ring_sp_sc");
+    if (rxq->pool == NULL) {
+    	RTE_LOG(ERR, IP_RSMBL,
+    		"rte_pktmbuf_pool_create(%s) failed", buf);
+    	return -1;
     }
 
 Packet Reassembly and Forwarding
diff --git a/doc/guides/sample_app_ug/ipv4_multicast.rst b/doc/guides/sample_app_ug/ipv4_multicast.rst
index 72da8c4..099d61a 100644
--- a/doc/guides/sample_app_ug/ipv4_multicast.rst
+++ b/doc/guides/sample_app_ug/ipv4_multicast.rst
@@ -145,12 +145,12 @@ Memory pools for indirect buffers are initialized differently from the memory po
 
 .. code-block:: c
 
-    packet_pool = rte_mempool_create("packet_pool", NB_PKT_MBUF, PKT_MBUF_SIZE, 32, sizeof(struct rte_pktmbuf_pool_private),
-                                     rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL, rte_socket_id(), 0);
-
-    header_pool = rte_mempool_create("header_pool", NB_HDR_MBUF, HDR_MBUF_SIZE, 32, 0, NULL, NULL, rte_pktmbuf_init, NULL, rte_socket_id(), 0);
-    clone_pool = rte_mempool_create("clone_pool", NB_CLONE_MBUF,
-    CLONE_MBUF_SIZE, 32, 0, NULL, NULL, rte_pktmbuf_init, NULL, rte_socket_id(), 0);
+    packet_pool = rte_pktmbuf_pool_create("packet_pool", NB_PKT_MBUF, 32,
+    	0, PKT_MBUF_DATA_SIZE, rte_socket_id(), NULL);
+    header_pool = rte_pktmbuf_pool_create("header_pool", NB_HDR_MBUF, 32,
+    	0, HDR_MBUF_DATA_SIZE, rte_socket_id(), NULL);
+    clone_pool = rte_pktmbuf_pool_create("clone_pool", NB_CLONE_MBUF, 32,
+    	0, 0, rte_socket_id(), NULL);
 
 The reason for this is because indirect buffers are not supposed to hold any packet data and
 therefore can be initialized with lower amount of reserved memory for each buffer.
diff --git a/doc/guides/sample_app_ug/l2_forward_job_stats.rst b/doc/guides/sample_app_ug/l2_forward_job_stats.rst
index 2444e36..a1b3f43 100644
--- a/doc/guides/sample_app_ug/l2_forward_job_stats.rst
+++ b/doc/guides/sample_app_ug/l2_forward_job_stats.rst
@@ -193,36 +193,25 @@ and the application to store network packet data:
 .. code-block:: c
 
     /* create the mbuf pool */
-    l2fwd_pktmbuf_pool =
-        rte_mempool_create("mbuf_pool", NB_MBUF,
-                   MBUF_SIZE, 32,
-                   sizeof(struct rte_pktmbuf_pool_private),
-                   rte_pktmbuf_pool_init, NULL,
-                   rte_pktmbuf_init, NULL,
-                   rte_socket_id(), 0);
+    l2fwd_pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF,
+    	MEMPOOL_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
+    	rte_socket_id(), NULL);
 
     if (l2fwd_pktmbuf_pool == NULL)
         rte_exit(EXIT_FAILURE, "Cannot init mbuf pool\n");
 
 The rte_mempool is a generic structure used to handle pools of objects.
-In this case, it is necessary to create a pool that will be used by the driver,
-which expects to have some reserved space in the mempool structure,
-sizeof(struct rte_pktmbuf_pool_private) bytes.
-The number of allocated pkt mbufs is NB_MBUF, with a size of MBUF_SIZE each.
-A per-lcore cache of 32 mbufs is kept.
+In this case, it is necessary to create a pool that will be used by the driver.
+The number of allocated pkt mbufs is NB_MBUF, with a data room size of
+RTE_MBUF_DEFAULT_BUF_SIZE each.
+A per-lcore cache of MEMPOOL_CACHE_SIZE mbufs is kept.
 The memory is allocated in rte_socket_id() socket,
 but it is possible to extend this code to allocate one mbuf pool per socket.
 
-Two callback pointers are also given to the rte_mempool_create() function:
-
-*   The first callback pointer is to rte_pktmbuf_pool_init() and is used
-    to initialize the private data of the mempool, which is needed by the driver.
-    This function is provided by the mbuf API, but can be copied and extended by the developer.
-
-*   The second callback pointer given to rte_mempool_create() is the mbuf initializer.
-    The default is used, that is, rte_pktmbuf_init(), which is provided in the rte_mbuf library.
-    If a more complex application wants to extend the rte_pktmbuf structure for its own needs,
-    a new function derived from rte_pktmbuf_init( ) can be created.
+The rte_pktmbuf_pool_create() function uses the default mbuf pool and mbuf
+initializers, respectively rte_pktmbuf_pool_init() and rte_pktmbuf_init().
+An advanced application may want to use the mempool API to create the
+mbuf pool with more control.
 
 Driver Initialization
 ~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/sample_app_ug/l2_forward_real_virtual.rst b/doc/guides/sample_app_ug/l2_forward_real_virtual.rst
index a1c10c0..2330148 100644
--- a/doc/guides/sample_app_ug/l2_forward_real_virtual.rst
+++ b/doc/guides/sample_app_ug/l2_forward_real_virtual.rst
@@ -197,31 +197,25 @@ and the application to store network packet data:
 
     /* create the mbuf pool */
 
-    l2fwd_pktmbuf_pool = rte_mempool_create("mbuf_pool", NB_MBUF, MBUF_SIZE, 32, sizeof(struct rte_pktmbuf_pool_private),
-        rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL, SOCKET0, 0);
+    l2fwd_pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF,
+    	MEMPOOL_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
+    	rte_socket_id(), NULL);
 
     if (l2fwd_pktmbuf_pool == NULL)
         rte_panic("Cannot init mbuf pool\n");
 
 The rte_mempool is a generic structure used to handle pools of objects.
-In this case, it is necessary to create a pool that will be used by the driver,
-which expects to have some reserved space in the mempool structure,
-sizeof(struct rte_pktmbuf_pool_private) bytes.
-The number of allocated pkt mbufs is NB_MBUF, with a size of MBUF_SIZE each.
+In this case, it is necessary to create a pool that will be used by the driver.
+The number of allocated pkt mbufs is NB_MBUF, with a data room size of
+RTE_MBUF_DEFAULT_BUF_SIZE each.
 A per-lcore cache of 32 mbufs is kept.
 The memory is allocated in NUMA socket 0,
 but it is possible to extend this code to allocate one mbuf pool per socket.
 
-Two callback pointers are also given to the rte_mempool_create() function:
-
-*   The first callback pointer is to rte_pktmbuf_pool_init() and is used
-    to initialize the private data of the mempool, which is needed by the driver.
-    This function is provided by the mbuf API, but can be copied and extended by the developer.
-
-*   The second callback pointer given to rte_mempool_create() is the mbuf initializer.
-    The default is used, that is, rte_pktmbuf_init(), which is provided in the rte_mbuf library.
-    If a more complex application wants to extend the rte_pktmbuf structure for its own needs,
-    a new function derived from rte_pktmbuf_init( ) can be created.
+The rte_pktmbuf_pool_create() function uses the default mbuf pool and mbuf
+initializers, respectively rte_pktmbuf_pool_init() and rte_pktmbuf_init().
+An advanced application may want to use the mempool API to create the
+mbuf pool with more control.
 
 .. _l2_fwd_app_dvr_init:
 
diff --git a/doc/guides/sample_app_ug/ptpclient.rst b/doc/guides/sample_app_ug/ptpclient.rst
index 6e425b7..4bd87c2 100644
--- a/doc/guides/sample_app_ug/ptpclient.rst
+++ b/doc/guides/sample_app_ug/ptpclient.rst
@@ -171,15 +171,9 @@ used by the application:
 
 .. code-block:: c
 
-    mbuf_pool = rte_mempool_create("MBUF_POOL",
-                                   NUM_MBUFS * nb_ports,
-                                   MBUF_SIZE,
-                                   MBUF_CACHE_SIZE,
-                                   sizeof(struct rte_pktmbuf_pool_private),
-                                   rte_pktmbuf_pool_init, NULL,
-                                   rte_pktmbuf_init,      NULL,
-                                   rte_socket_id(),
-                                   0);
+    mbuf_pool = rte_pktmbuf_pool_create("MBUF_POOL", NUM_MBUFS * nb_ports,
+    	MBUF_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id(),
+    	NULL);
 
 Mbufs are the packet buffer structure used by DPDK. They are explained in
 detail in the "Mbuf Library" section of the *DPDK Programmer's Guide*.
diff --git a/doc/guides/sample_app_ug/quota_watermark.rst b/doc/guides/sample_app_ug/quota_watermark.rst
index c56683a..f3a6624 100644
--- a/doc/guides/sample_app_ug/quota_watermark.rst
+++ b/doc/guides/sample_app_ug/quota_watermark.rst
@@ -254,32 +254,24 @@ It contains a set of mbuf objects that are used by the driver and the applicatio
 .. code-block:: c
 
     /* Create a pool of mbuf to store packets */
-
-    mbuf_pool = rte_mempool_create("mbuf_pool", MBUF_PER_POOL, MBUF_SIZE, 32, sizeof(struct rte_pktmbuf_pool_private),
-        rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL, rte_socket_id(), 0);
+    mbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", MBUF_PER_POOL, 32, 0,
+    	MBUF_DATA_SIZE, rte_socket_id(), NULL);
 
     if (mbuf_pool == NULL)
         rte_panic("%s\n", rte_strerror(rte_errno));
 
 The rte_mempool is a generic structure used to handle pools of objects.
-In this case, it is necessary to create a pool that will be used by the driver,
-which expects to have some reserved space in the mempool structure, sizeof(struct rte_pktmbuf_pool_private) bytes.
+In this case, it is necessary to create a pool that will be used by the driver.
 
-The number of allocated pkt mbufs is MBUF_PER_POOL, with a size of MBUF_SIZE each.
+The number of allocated pkt mbufs is MBUF_PER_POOL, with a data room size
+of MBUF_DATA_SIZE each.
 A per-lcore cache of 32 mbufs is kept.
 The memory is allocated in on the master lcore's socket, but it is possible to extend this code to allocate one mbuf pool per socket.
 
-Two callback pointers are also given to the rte_mempool_create() function:
-
-*   The first callback pointer is to rte_pktmbuf_pool_init() and is used to initialize the private data of the mempool,
-    which is needed by the driver.
-    This function is provided by the mbuf API, but can be copied and extended by the developer.
-
-*   The second callback pointer given to rte_mempool_create() is the mbuf initializer.
-
-The default is used, that is, rte_pktmbuf_init(), which is provided in the rte_mbuf library.
-If a more complex application wants to extend the rte_pktmbuf structure for its own needs,
-a new function derived from rte_pktmbuf_init() can be created.
+The rte_pktmbuf_pool_create() function uses the default mbuf pool and mbuf
+initializers, respectively rte_pktmbuf_pool_init() and rte_pktmbuf_init().
+An advanced application may want to use the mempool API to create the
+mbuf pool with more control.
 
 Ports Configuration and Pairing
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c
index 2f7ae70..e234c63 100644
--- a/drivers/net/bonding/rte_eth_bond_8023ad.c
+++ b/drivers/net/bonding/rte_eth_bond_8023ad.c
@@ -888,8 +888,8 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev, uint8_t slave_id)
 	RTE_ASSERT(port->tx_ring == NULL);
 	socket_id = rte_eth_devices[slave_id].data->numa_node;
 
-	element_size = sizeof(struct slow_protocol_frame) + sizeof(struct rte_mbuf)
-				+ RTE_PKTMBUF_HEADROOM;
+	element_size = sizeof(struct slow_protocol_frame) +
+		RTE_PKTMBUF_HEADROOM;
 
 	/* The size of the mempool should be at least:
 	 * the sum of the TX descriptors + BOND_MODE_8023AX_SLAVE_TX_PKTS */
@@ -900,11 +900,10 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev, uint8_t slave_id)
 	}
 
 	snprintf(mem_name, RTE_DIM(mem_name), "slave_port%u_pool", slave_id);
-	port->mbuf_pool = rte_mempool_create(mem_name,
-		total_tx_desc, element_size,
-		RTE_MEMPOOL_CACHE_MAX_SIZE >= 32 ? 32 : RTE_MEMPOOL_CACHE_MAX_SIZE,
-		sizeof(struct rte_pktmbuf_pool_private), rte_pktmbuf_pool_init,
-		NULL, rte_pktmbuf_init, NULL, socket_id, MEMPOOL_F_NO_SPREAD);
+	port->mbuf_pool = rte_pktmbuf_pool_create(mem_name, total_tx_desc,
+		RTE_MEMPOOL_CACHE_MAX_SIZE >= 32 ?
+			32 : RTE_MEMPOOL_CACHE_MAX_SIZE,
+		0, element_size, socket_id, NULL);
 
 	/* Any memory allocation failure in initalization is critical because
 	 * resources can't be free, so reinitialization is impossible. */
diff --git a/examples/ip_pipeline/init.c b/examples/ip_pipeline/init.c
index cd167f6..d86aa86 100644
--- a/examples/ip_pipeline/init.c
+++ b/examples/ip_pipeline/init.c
@@ -316,16 +316,15 @@ app_init_mempool(struct app_params *app)
 		struct app_mempool_params *p = &app->mempool_params[i];
 
 		APP_LOG(app, HIGH, "Initializing %s ...", p->name);
-		app->mempool[i] = rte_mempool_create(
-				p->name,
-				p->pool_size,
-				p->buffer_size,
-				p->cache_size,
-				sizeof(struct rte_pktmbuf_pool_private),
-				rte_pktmbuf_pool_init, NULL,
-				rte_pktmbuf_init, NULL,
-				p->cpu_socket_id,
-				0);
+		app->mempool[i] = rte_pktmbuf_pool_create(
+			p->name,
+			p->pool_size,
+			p->cache_size,
+			0, /* priv_size */
+			p->buffer_size -
+				sizeof(struct rte_mbuf), /* mbuf data size */
+			p->cpu_socket_id,
+			NULL);
 
 		if (app->mempool[i] == NULL)
 			rte_panic("%s init error\n", p->name);
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index 50fe422..8648161 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -84,9 +84,7 @@
 
 #define MAX_JUMBO_PKT_LEN  9600
 
-#define	BUF_SIZE	RTE_MBUF_DEFAULT_DATAROOM
-#define MBUF_SIZE	\
-	(BUF_SIZE + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
+#define	MBUF_DATA_SIZE	RTE_MBUF_DEFAULT_BUF_SIZE
 
 #define NB_MBUF 8192
 
@@ -909,11 +907,13 @@ setup_queue_tbl(struct rx_queue *rxq, uint32_t lcore, uint32_t queue)
 
 	snprintf(buf, sizeof(buf), "mbuf_pool_%u_%u", lcore, queue);
 
-	if ((rxq->pool = rte_mempool_create(buf, nb_mbuf, MBUF_SIZE, 0,
-			sizeof(struct rte_pktmbuf_pool_private),
-			rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL,
-			socket, MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET)) == NULL) {
-		RTE_LOG(ERR, IP_RSMBL, "mempool_create(%s) failed", buf);
+	rxq->pool = rte_pktmbuf_pool_create(buf, nb_mbuf,
+		0, /* cache size */
+		0, /* priv size */
+		MBUF_DATA_SIZE, socket, "ring_sp_sc");
+	if (rxq->pool == NULL) {
+		RTE_LOG(ERR, IP_RSMBL,
+			"rte_pktmbuf_pool_create(%s) failed", buf);
 		return -1;
 	}
 
diff --git a/examples/multi_process/l2fwd_fork/main.c b/examples/multi_process/l2fwd_fork/main.c
index 2d951d9..358a760 100644
--- a/examples/multi_process/l2fwd_fork/main.c
+++ b/examples/multi_process/l2fwd_fork/main.c
@@ -77,8 +77,7 @@
 
 #define RTE_LOGTYPE_L2FWD RTE_LOGTYPE_USER1
 #define MBUF_NAME	"mbuf_pool_%d"
-#define MBUF_SIZE	\
-(RTE_MBUF_DEFAULT_DATAROOM + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
+#define MBUF_DATA_SIZE	RTE_MBUF_DEFAULT_BUF_SIZE
 #define NB_MBUF   8192
 #define RING_MASTER_NAME	"l2fwd_ring_m2s_"
 #define RING_SLAVE_NAME		"l2fwd_ring_s2m_"
@@ -989,14 +988,11 @@ main(int argc, char **argv)
 		flags = MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET;
 		snprintf(buf_name, RTE_MEMPOOL_NAMESIZE, MBUF_NAME, portid);
 		l2fwd_pktmbuf_pool[portid] =
-			rte_mempool_create(buf_name, NB_MBUF,
-					   MBUF_SIZE, 32,
-					   sizeof(struct rte_pktmbuf_pool_private),
-					   rte_pktmbuf_pool_init, NULL,
-					   rte_pktmbuf_init, NULL,
-					   rte_socket_id(), flags);
+			rte_pktmbuf_pool_create(buf_name, NB_MBUF, 32,
+				0, MBUF_DATA_SIZE, rte_socket_id(),
+				NULL);
 		if (l2fwd_pktmbuf_pool[portid] == NULL)
-			rte_exit(EXIT_FAILURE, "Cannot init mbuf pool\n");
+			rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");
 
 		printf("Create mbuf %s\n", buf_name);
 	}
diff --git a/examples/tep_termination/main.c b/examples/tep_termination/main.c
index 622f248..2b786c5 100644
--- a/examples/tep_termination/main.c
+++ b/examples/tep_termination/main.c
@@ -68,7 +68,7 @@
 				(nb_switching_cores * MBUF_CACHE_SIZE))
 
 #define MBUF_CACHE_SIZE 128
-#define MBUF_SIZE (2048 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
+#define MBUF_DATA_SIZE RTE_MBUF_DEFAULT_BUF_SIZE
 
 #define MAX_PKT_BURST 32	/* Max burst size for RX/TX */
 #define BURST_TX_DRAIN_US 100	/* TX drain every ~100us */
@@ -1200,15 +1200,14 @@ main(int argc, char *argv[])
 			MAX_SUP_PORTS);
 	}
 	/* Create the mbuf pool. */
-	mbuf_pool = rte_mempool_create(
+	mbuf_pool = rte_pktmbuf_pool_create(
 			"MBUF_POOL",
-			NUM_MBUFS_PER_PORT
-			* valid_nb_ports,
-			MBUF_SIZE, MBUF_CACHE_SIZE,
-			sizeof(struct rte_pktmbuf_pool_private),
-			rte_pktmbuf_pool_init, NULL,
-			rte_pktmbuf_init, NULL,
-			rte_socket_id(), 0);
+			NUM_MBUFS_PER_PORT * valid_nb_ports,
+			MBUF_CACHE_SIZE,
+			0,
+			MBUF_DATA_SIZE,
+			rte_socket_id(),
+			NULL);
 	if (mbuf_pool == NULL)
 		rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");
 
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index 3e9cbb6..4b871ca 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -62,7 +62,7 @@
 
 /*
  * ctrlmbuf constructor, given as a callback function to
- * rte_mempool_create()
+ * rte_mempool_obj_iter() or rte_mempool_create()
  */
 void
 rte_ctrlmbuf_init(struct rte_mempool *mp,
@@ -77,7 +77,8 @@ rte_ctrlmbuf_init(struct rte_mempool *mp,
 
 /*
  * pktmbuf pool constructor, given as a callback function to
- * rte_mempool_create()
+ * rte_mempool_create(), or called directly if using
+ * rte_mempool_create_empty()/rte_mempool_populate()
  */
 void
 rte_pktmbuf_pool_init(struct rte_mempool *mp, void *opaque_arg)
@@ -110,7 +111,7 @@ rte_pktmbuf_pool_init(struct rte_mempool *mp, void *opaque_arg)
 
 /*
  * pktmbuf constructor, given as a callback function to
- * rte_mempool_create().
+ * rte_mempool_obj_iter() or rte_mempool_create().
  * Set the fields of a packet mbuf to their default values.
  */
 void
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 774e071..352fa02 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -44,6 +44,13 @@
  * buffers. The message buffers are stored in a mempool, using the
  * RTE mempool library.
  *
+ * The preferred way to create a mbuf pool is to use
+ * rte_pktmbuf_pool_create(). However, in some situations, an
+ * application may want to have more control (ex: populate the pool with
+ * specific memory), in this case it is possible to use functions from
+ * rte_mempool. See how rte_pktmbuf_pool_create() is implemented for
+ * details.
+ *
  * This library provide an API to allocate/free packet mbufs, which are
  * used to carry network packets.
  *
@@ -1189,14 +1196,14 @@ __rte_mbuf_raw_free(struct rte_mbuf *m)
  * This function initializes some fields in an mbuf structure that are
  * not modified by the user once created (mbuf type, origin pool, buffer
  * start address, and so on). This function is given as a callback function
- * to rte_mempool_create() at pool creation time.
+ * to rte_mempool_obj_iter() or rte_mempool_create() at pool creation time.
  *
  * @param mp
  *   The mempool from which the mbuf is allocated.
  * @param opaque_arg
  *   A pointer that can be used by the user to retrieve useful information
- *   for mbuf initialization. This pointer comes from the ``init_arg``
- *   parameter of rte_mempool_create().
+ *   for mbuf initialization. This pointer is the opaque argument passed to
+ *   rte_mempool_obj_iter() or rte_mempool_create().
  * @param m
  *   The mbuf to initialize.
  * @param i
@@ -1270,14 +1277,14 @@ rte_is_ctrlmbuf(struct rte_mbuf *m)
  * This function initializes some fields in the mbuf structure that are
  * not modified by the user once created (origin pool, buffer start
  * address, and so on). This function is given as a callback function to
- * rte_mempool_create() at pool creation time.
+ * rte_mempool_obj_iter() or rte_mempool_create() at pool creation time.
  *
  * @param mp
  *   The mempool from which mbufs originate.
  * @param opaque_arg
  *   A pointer that can be used by the user to retrieve useful information
- *   for mbuf initialization. This pointer comes from the ``init_arg``
- *   parameter of rte_mempool_create().
+ *   for mbuf initialization. This pointer is the opaque argument passed to
+ *   rte_mempool_obj_iter() or rte_mempool_create().
  * @param m
  *   The mbuf to initialize.
  * @param i
@@ -1292,7 +1299,8 @@ void rte_pktmbuf_init(struct rte_mempool *mp, void *opaque_arg,
  *
  * This function initializes the mempool private data in the case of a
  * pktmbuf pool. This private data is needed by the driver. The
- * function is given as a callback function to rte_mempool_create() at
+ * function must be called on the mempool before it is used, or it
+ * can be given as a callback function to rte_mempool_create() at
  * pool creation. It can be extended by the user, for example, to
  * provide another packet size.
  *
@@ -1300,8 +1308,8 @@ void rte_pktmbuf_init(struct rte_mempool *mp, void *opaque_arg,
  *   The mempool from which mbufs originate.
  * @param opaque_arg
  *   A pointer that can be used by the user to retrieve useful information
- *   for mbuf initialization. This pointer comes from the ``init_arg``
- *   parameter of rte_mempool_create().
+ *   for mbuf initialization. This pointer is the opaque argument passed to
+ *   rte_mempool_create().
  */
 void rte_pktmbuf_pool_init(struct rte_mempool *mp, void *opaque_arg);
 
@@ -1309,8 +1317,7 @@ void rte_pktmbuf_pool_init(struct rte_mempool *mp, void *opaque_arg);
  * Create a mbuf pool.
  *
  * This function creates and initializes a packet mbuf pool. It is
- * a wrapper to rte_mempool_create() with the proper packet constructor
- * and mempool constructor.
+ * a wrapper to rte_mempool functions.
  *
  * @param name
  *   The name of the mbuf pool.
-- 
2.8.1

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [RFC 3/7] testpmd: new parameter to set mbuf pool ops
  2016-09-19 13:42 [RFC 0/7] changing mbuf pool handler Olivier Matz
  2016-09-19 13:42 ` [RFC 1/7] mbuf: set the handler at mbuf pool creation Olivier Matz
  2016-09-19 13:42 ` [RFC 2/7] mbuf: use helper to create the pool Olivier Matz
@ 2016-09-19 13:42 ` Olivier Matz
  2016-09-19 13:42 ` [RFC 4/7] l3fwd: rework long options parsing Olivier Matz
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 15+ messages in thread
From: Olivier Matz @ 2016-09-19 13:42 UTC (permalink / raw)
  To: dev; +Cc: jerin.jacob, hemant.agrawal, david.hunt

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
 app/test-pmd/parameters.c |  5 +++++
 app/test-pmd/testpmd.c    | 15 ++++++++++++++-
 app/test-pmd/testpmd.h    |  1 +
 3 files changed, 20 insertions(+), 1 deletion(-)

diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index 6a6a07e..cbd287d 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -131,6 +131,7 @@ usage(char* progname)
 	printf("  --total-num-mbufs=N: set the number of mbufs to be allocated "
 	       "in mbuf pools.\n");
 	printf("  --max-pkt-len=N: set the maximum size of packet to N bytes.\n");
+	printf("  --mbuf-pool-ops=<handler>: set an alternative mbuf pool handler\n");
 #ifdef RTE_LIBRTE_CMDLINE
 	printf("  --eth-peers-configfile=name: config file with ethernet addresses "
 	       "of peer ports.\n");
@@ -519,6 +520,7 @@ launch_args_parse(int argc, char** argv)
 		{ "mbuf-size",			1, 0, 0 },
 		{ "total-num-mbufs",		1, 0, 0 },
 		{ "max-pkt-len",		1, 0, 0 },
+		{ "mbuf-pool-ops",		1, 0, 0 },
 		{ "pkt-filter-mode",            1, 0, 0 },
 		{ "pkt-filter-report-hash",     1, 0, 0 },
 		{ "pkt-filter-size",            1, 0, 0 },
@@ -701,6 +703,9 @@ launch_args_parse(int argc, char** argv)
 						 "Invalid max-pkt-len=%d - should be > %d\n",
 						 n, ETHER_MIN_LEN);
 			}
+			if (!strcmp(lgopts[opt_idx].name, "mbuf-pool-ops")) {
+				mbuf_pool_ops = strdup(optarg);
+			}
 			if (!strcmp(lgopts[opt_idx].name, "pkt-filter-mode")) {
 				if (!strcmp(optarg, "signature"))
 					fdir_conf.mode =
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index cc3d2d0..669bf97 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -167,6 +167,7 @@ uint32_t burst_tx_retry_num = BURST_TX_RETRIES;
 uint16_t mbuf_data_size = DEFAULT_MBUF_DATA_SIZE; /**< Mbuf data space size. */
 uint32_t param_total_num_mbufs = 0;  /**< number of mbufs in all pools - if
                                       * specified on command-line. */
+const char *mbuf_pool_ops = NULL;
 
 /*
  * Configuration of packet segments used by the "txonly" processing engine.
@@ -419,6 +420,7 @@ mbuf_pool_create(uint16_t mbuf_seg_size, unsigned nb_mbuf,
 	char pool_name[RTE_MEMPOOL_NAMESIZE];
 	struct rte_mempool *rte_mp = NULL;
 	uint32_t mb_size;
+	int ret;
 
 	mb_size = sizeof(struct rte_mbuf) + mbuf_seg_size;
 	mbuf_poolname_build(socket_id, pool_name, sizeof(pool_name));
@@ -444,6 +446,17 @@ mbuf_pool_create(uint16_t mbuf_seg_size, unsigned nb_mbuf,
 				sizeof(struct rte_pktmbuf_pool_private),
 				socket_id, 0);
 
+			if (rte_mp != NULL) {
+				ret = rte_mempool_set_ops_byname(rte_mp,
+					mbuf_pool_ops, NULL);
+				if (ret != 0) {
+					RTE_LOG(ERR, MBUF,
+						"cannot set mempool handler\n");
+					rte_mempool_free(rte_mp);
+					rte_mp = NULL;
+				}
+			}
+
 			if (rte_mempool_populate_anon(rte_mp) == 0) {
 				rte_mempool_free(rte_mp);
 				rte_mp = NULL;
@@ -454,7 +467,7 @@ mbuf_pool_create(uint16_t mbuf_seg_size, unsigned nb_mbuf,
 			/* wrapper to rte_mempool_create() */
 			rte_mp = rte_pktmbuf_pool_create(pool_name, nb_mbuf,
 				mb_mempool_cache, 0, mbuf_seg_size, socket_id,
-				NULL);
+				mbuf_pool_ops);
 		}
 	}
 
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 2b281cc..c7bab77 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -357,6 +357,7 @@ extern enum dcb_queue_mapping_mode dcb_q_mapping;
 
 extern uint16_t mbuf_data_size; /**< Mbuf data space size. */
 extern uint32_t param_total_num_mbufs;
+extern const char *mbuf_pool_ops;  /**< mbuf pool handler to use */
 
 extern struct rte_fdir_conf fdir_conf;
 
-- 
2.8.1

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [RFC 4/7] l3fwd: rework long options parsing
  2016-09-19 13:42 [RFC 0/7] changing mbuf pool handler Olivier Matz
                   ` (2 preceding siblings ...)
  2016-09-19 13:42 ` [RFC 3/7] testpmd: new parameter to set mbuf pool ops Olivier Matz
@ 2016-09-19 13:42 ` Olivier Matz
  2016-09-19 13:42 ` [RFC 5/7] l3fwd: new parameter to set mbuf pool ops Olivier Matz
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 15+ messages in thread
From: Olivier Matz @ 2016-09-19 13:42 UTC (permalink / raw)
  To: dev; +Cc: jerin.jacob, hemant.agrawal, david.hunt

Avoid the use of several strncpy() calls: getopt is able to map a long
option to an id, which can then be matched in the same switch/case as
the short options.
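
The pattern, roughly (the option name and parse functions below are only
illustrative placeholders, not the actual l3fwd ones):

    enum {
        /* values >= 256 cannot collide with short option characters */
        CMD_LINE_OPT_FOO_NUM = 256,
    };

    static const struct option lgopts[] = {
        {"foo", required_argument, NULL, CMD_LINE_OPT_FOO_NUM},
        {NULL, 0, 0, 0}
    };

    while ((opt = getopt_long(argc, argv, "p:", lgopts, &idx)) != EOF) {
        switch (opt) {
        case 'p':                  /* short option */
            parse_portmask(optarg);
            break;
        case CMD_LINE_OPT_FOO_NUM: /* long option, same switch */
            parse_foo(optarg);
            break;
        }
    }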

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
 examples/l3fwd/main.c | 167 ++++++++++++++++++++++++++------------------------
 1 file changed, 86 insertions(+), 81 deletions(-)

diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 328bae2..9894a3b 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -474,6 +474,13 @@ parse_eth_dest(const char *optarg)
 #define MAX_JUMBO_PKT_LEN  9600
 #define MEMPOOL_CACHE_SIZE 256
 
+static const char short_options[] =
+	"p:"  /* portmask */
+	"P"   /* promiscuous */
+	"L"   /* enable long prefix match */
+	"E"   /* enable exact match */
+	;
+
 #define CMD_LINE_OPT_CONFIG "config"
 #define CMD_LINE_OPT_ETH_DEST "eth-dest"
 #define CMD_LINE_OPT_NO_NUMA "no-numa"
@@ -481,6 +488,31 @@ parse_eth_dest(const char *optarg)
 #define CMD_LINE_OPT_ENABLE_JUMBO "enable-jumbo"
 #define CMD_LINE_OPT_HASH_ENTRY_NUM "hash-entry-num"
 #define CMD_LINE_OPT_PARSE_PTYPE "parse-ptype"
+enum {
+	/* long options mapped to a short option */
+
+	/* first long only option value must be >= 256, so that we won't
+	 * conflict with short options */
+	CMD_LINE_OPT_MIN_NUM = 256,
+	CMD_LINE_OPT_CONFIG_NUM,
+	CMD_LINE_OPT_ETH_DEST_NUM,
+	CMD_LINE_OPT_NO_NUMA_NUM,
+	CMD_LINE_OPT_IPV6_NUM,
+	CMD_LINE_OPT_ENABLE_JUMBO_NUM,
+	CMD_LINE_OPT_HASH_ENTRY_NUM_NUM,
+	CMD_LINE_OPT_PARSE_PTYPE_NUM,
+};
+
+static const struct option lgopts[] = {
+	{CMD_LINE_OPT_CONFIG, 1, 0, CMD_LINE_OPT_CONFIG_NUM},
+	{CMD_LINE_OPT_ETH_DEST, 1, 0, CMD_LINE_OPT_ETH_DEST_NUM},
+	{CMD_LINE_OPT_NO_NUMA, 0, 0, CMD_LINE_OPT_NO_NUMA_NUM},
+	{CMD_LINE_OPT_IPV6, 0, 0, CMD_LINE_OPT_IPV6_NUM},
+	{CMD_LINE_OPT_ENABLE_JUMBO, 0, 0, CMD_LINE_OPT_ENABLE_JUMBO_NUM},
+	{CMD_LINE_OPT_HASH_ENTRY_NUM, 1, 0, CMD_LINE_OPT_HASH_ENTRY_NUM_NUM},
+	{CMD_LINE_OPT_PARSE_PTYPE, 0, 0, CMD_LINE_OPT_PARSE_PTYPE_NUM},
+	{NULL, 0, 0, 0}
+};
 
 /*
  * This expression is used to calculate the number of mbufs needed
@@ -504,16 +536,6 @@ parse_args(int argc, char **argv)
 	char **argvopt;
 	int option_index;
 	char *prgname = argv[0];
-	static struct option lgopts[] = {
-		{CMD_LINE_OPT_CONFIG, 1, 0, 0},
-		{CMD_LINE_OPT_ETH_DEST, 1, 0, 0},
-		{CMD_LINE_OPT_NO_NUMA, 0, 0, 0},
-		{CMD_LINE_OPT_IPV6, 0, 0, 0},
-		{CMD_LINE_OPT_ENABLE_JUMBO, 0, 0, 0},
-		{CMD_LINE_OPT_HASH_ENTRY_NUM, 1, 0, 0},
-		{CMD_LINE_OPT_PARSE_PTYPE, 0, 0, 0},
-		{NULL, 0, 0, 0}
-	};
 
 	argvopt = argv;
 
@@ -563,88 +585,71 @@ parse_args(int argc, char **argv)
 			break;
 
 		/* long options */
-		case 0:
-			if (!strncmp(lgopts[option_index].name,
-					CMD_LINE_OPT_CONFIG,
-					sizeof(CMD_LINE_OPT_CONFIG))) {
-
-				ret = parse_config(optarg);
-				if (ret) {
-					printf("%s\n", str5);
-					print_usage(prgname);
-					return -1;
-				}
-			}
-
-			if (!strncmp(lgopts[option_index].name,
-					CMD_LINE_OPT_ETH_DEST,
-					sizeof(CMD_LINE_OPT_ETH_DEST))) {
-					parse_eth_dest(optarg);
-			}
-
-			if (!strncmp(lgopts[option_index].name,
-					CMD_LINE_OPT_NO_NUMA,
-					sizeof(CMD_LINE_OPT_NO_NUMA))) {
-				printf("%s\n", str6);
-				numa_on = 0;
+		case CMD_LINE_OPT_CONFIG_NUM:
+			ret = parse_config(optarg);
+			if (ret) {
+				printf("%s\n", str5);
+				print_usage(prgname);
+				return -1;
 			}
+			break;
 
-			if (!strncmp(lgopts[option_index].name,
-				CMD_LINE_OPT_IPV6,
-				sizeof(CMD_LINE_OPT_IPV6))) {
-				printf("%sn", str7);
-				ipv6 = 1;
-			}
+		case CMD_LINE_OPT_ETH_DEST_NUM:
+			parse_eth_dest(optarg);
+			break;
 
-			if (!strncmp(lgopts[option_index].name,
-					CMD_LINE_OPT_ENABLE_JUMBO,
-					sizeof(CMD_LINE_OPT_ENABLE_JUMBO))) {
-				struct option lenopts = {
-					"max-pkt-len", required_argument, 0, 0
-				};
-
-				printf("%s\n", str8);
-				port_conf.rxmode.jumbo_frame = 1;
-
-				/*
-				 * if no max-pkt-len set, use the default
-				 * value ETHER_MAX_LEN.
-				 */
-				if (0 == getopt_long(argc, argvopt, "",
-						&lenopts, &option_index)) {
-					ret = parse_max_pkt_len(optarg);
-					if ((ret < 64) ||
-						(ret > MAX_JUMBO_PKT_LEN)) {
-						printf("%s\n", str9);
-						print_usage(prgname);
-						return -1;
-					}
-					port_conf.rxmode.max_rx_pkt_len = ret;
-				}
-				printf("%s %u\n", str10,
-				(unsigned int)port_conf.rxmode.max_rx_pkt_len);
-			}
+		case CMD_LINE_OPT_NO_NUMA_NUM:
+			printf("%s\n", str6);
+			numa_on = 0;
+			break;
 
-			if (!strncmp(lgopts[option_index].name,
-				CMD_LINE_OPT_HASH_ENTRY_NUM,
-				sizeof(CMD_LINE_OPT_HASH_ENTRY_NUM))) {
+		case CMD_LINE_OPT_IPV6_NUM:
+			printf("%sn", str7);
+			ipv6 = 1;
+			break;
 
-				ret = parse_hash_entry_number(optarg);
-				if ((ret > 0) && (ret <= L3FWD_HASH_ENTRIES)) {
-					hash_entry_number = ret;
-				} else {
-					printf("%s\n", str11);
+		case CMD_LINE_OPT_ENABLE_JUMBO_NUM: {
+			struct option lenopts = {
+				"max-pkt-len", required_argument, 0, 0
+			};
+
+			printf("%s\n", str8);
+			port_conf.rxmode.jumbo_frame = 1;
+
+			/*
+			 * if no max-pkt-len set, use the default
+			 * value ETHER_MAX_LEN.
+			 */
+			if (0 == getopt_long(argc, argvopt, "",
+					&lenopts, &option_index)) {
+				ret = parse_max_pkt_len(optarg);
+				if ((ret < 64) ||
+					(ret > MAX_JUMBO_PKT_LEN)) {
+					printf("%s\n", str9);
 					print_usage(prgname);
 					return -1;
 				}
+				port_conf.rxmode.max_rx_pkt_len = ret;
 			}
+			printf("%s %u\n", str10,
+				(unsigned int)port_conf.rxmode.max_rx_pkt_len);
+			break;
+		}
 
-			if (!strncmp(lgopts[option_index].name,
-				     CMD_LINE_OPT_PARSE_PTYPE,
-				     sizeof(CMD_LINE_OPT_PARSE_PTYPE))) {
-				printf("soft parse-ptype is enabled\n");
-				parse_ptype = 1;
+		case CMD_LINE_OPT_HASH_ENTRY_NUM_NUM:
+			ret = parse_hash_entry_number(optarg);
+			if ((ret > 0) && (ret <= L3FWD_HASH_ENTRIES)) {
+				hash_entry_number = ret;
+			} else {
+				printf("%s\n", str11);
+				print_usage(prgname);
+				return -1;
 			}
+			break;
+
+		case CMD_LINE_OPT_PARSE_PTYPE_NUM:
+			printf("soft parse-ptype is enabled\n");
+			parse_ptype = 1;
 
 			break;
 
-- 
2.8.1

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [RFC 5/7] l3fwd: new parameter to set mbuf pool ops
  2016-09-19 13:42 [RFC 0/7] changing mbuf pool handler Olivier Matz
                   ` (3 preceding siblings ...)
  2016-09-19 13:42 ` [RFC 4/7] l3fwd: rework long options parsing Olivier Matz
@ 2016-09-19 13:42 ` Olivier Matz
  2016-09-19 13:42 ` [RFC 6/7] l2fwd: rework long options parsing Olivier Matz
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 15+ messages in thread
From: Olivier Matz @ 2016-09-19 13:42 UTC (permalink / raw)
  To: dev; +Cc: jerin.jacob, hemant.agrawal, david.hunt

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
 examples/l3fwd/main.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 9894a3b..e45e87b 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -104,6 +104,7 @@ static int l3fwd_em_on;
 static int numa_on = 1; /**< NUMA is enabled by default. */
 static int parse_ptype; /**< Parse packet type using rx callback, and */
 			/**< disabled by default */
+static const char *mbuf_pool_ops; /**< mbuf pool handler */
 
 /* Global variables. */
 
@@ -488,6 +489,8 @@ static const char short_options[] =
 #define CMD_LINE_OPT_ENABLE_JUMBO "enable-jumbo"
 #define CMD_LINE_OPT_HASH_ENTRY_NUM "hash-entry-num"
 #define CMD_LINE_OPT_PARSE_PTYPE "parse-ptype"
+#define CMD_LINE_OPT_MBUF_POOL_OPS "mbuf-pool-ops"
+
 enum {
 	/* long options mapped to a short option */
 
@@ -501,6 +504,7 @@ enum {
 	CMD_LINE_OPT_ENABLE_JUMBO_NUM,
 	CMD_LINE_OPT_HASH_ENTRY_NUM_NUM,
 	CMD_LINE_OPT_PARSE_PTYPE_NUM,
+	CMD_LINE_OPT_MBUF_POOL_OPS_NUM,
 };
 
 static const struct option lgopts[] = {
@@ -511,6 +515,7 @@ static const struct option lgopts[] = {
 	{CMD_LINE_OPT_ENABLE_JUMBO, 0, 0, CMD_LINE_OPT_ENABLE_JUMBO_NUM},
 	{CMD_LINE_OPT_HASH_ENTRY_NUM, 1, 0, CMD_LINE_OPT_HASH_ENTRY_NUM_NUM},
 	{CMD_LINE_OPT_PARSE_PTYPE, 0, 0, CMD_LINE_OPT_PARSE_PTYPE_NUM},
+	{CMD_LINE_OPT_MBUF_POOL_OPS, 1, 0, CMD_LINE_OPT_MBUF_POOL_OPS_NUM},
 	{NULL, 0, 0, 0}
 };
 
@@ -650,7 +655,12 @@ parse_args(int argc, char **argv)
 		case CMD_LINE_OPT_PARSE_PTYPE_NUM:
 			printf("soft parse-ptype is enabled\n");
 			parse_ptype = 1;
+			break;
+
 
+		case CMD_LINE_OPT_MBUF_POOL_OPS_NUM:
+			printf("set mbuf pool ops to <%s>\n", optarg);
+			mbuf_pool_ops = strdup(optarg);
 			break;
 
 		default:
-- 
2.8.1

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [RFC 6/7] l2fwd: rework long options parsing
  2016-09-19 13:42 [RFC 0/7] changing mbuf pool handler Olivier Matz
                   ` (4 preceding siblings ...)
  2016-09-19 13:42 ` [RFC 5/7] l3fwd: new parameter to set mbuf pool ops Olivier Matz
@ 2016-09-19 13:42 ` Olivier Matz
  2016-09-19 13:42 ` [RFC 7/7] l2fwd: new parameter to set mbuf pool ops Olivier Matz
  2016-09-22 11:52 ` [RFC 0/7] changing mbuf pool handler Hemant Agrawal
  7 siblings, 0 replies; 15+ messages in thread
From: Olivier Matz @ 2016-09-19 13:42 UTC (permalink / raw)
  To: dev; +Cc: jerin.jacob, hemant.agrawal, david.hunt

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
 examples/l2fwd/main.c | 26 ++++++++++++++++++--------
 1 file changed, 18 insertions(+), 8 deletions(-)

diff --git a/examples/l2fwd/main.c b/examples/l2fwd/main.c
index 41ac1e1..028900b 100644
--- a/examples/l2fwd/main.c
+++ b/examples/l2fwd/main.c
@@ -376,6 +376,24 @@ l2fwd_parse_timer_period(const char *q_arg)
 	return n;
 }
 
+static const char short_options[] =
+	"p:"  /* portmask */
+	"q:"  /* number of queues */
+	"T:"  /* timer period */
+	;
+
+enum {
+	/* long options mapped to a short option */
+
+	/* first long only option value must be >= 256, so that we won't
+	 * conflict with short options */
+	CMD_LINE_OPT_MIN_NUM = 256,
+};
+
+static const struct option lgopts[] = {
+	{NULL, 0, 0, 0}
+};
+
 /* Parse the argument given in the command line of the application */
 static int
 l2fwd_parse_args(int argc, char **argv)
@@ -384,9 +402,6 @@ l2fwd_parse_args(int argc, char **argv)
 	char **argvopt;
 	int option_index;
 	char *prgname = argv[0];
-	static struct option lgopts[] = {
-		{NULL, 0, 0, 0}
-	};
 
 	argvopt = argv;
 
@@ -425,11 +440,6 @@ l2fwd_parse_args(int argc, char **argv)
 			timer_period = timer_secs;
 			break;
 
-		/* long options */
-		case 0:
-			l2fwd_usage(prgname);
-			return -1;
-
 		default:
 			l2fwd_usage(prgname);
 			return -1;
-- 
2.8.1

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [RFC 7/7] l2fwd: new parameter to set mbuf pool ops
  2016-09-19 13:42 [RFC 0/7] changing mbuf pool handler Olivier Matz
                   ` (5 preceding siblings ...)
  2016-09-19 13:42 ` [RFC 6/7] l2fwd: rework long options parsing Olivier Matz
@ 2016-09-19 13:42 ` Olivier Matz
  2016-09-22 11:52 ` [RFC 0/7] changing mbuf pool handler Hemant Agrawal
  7 siblings, 0 replies; 15+ messages in thread
From: Olivier Matz @ 2016-09-19 13:42 UTC (permalink / raw)
  To: dev; +Cc: jerin.jacob, hemant.agrawal, david.hunt

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
 examples/l2fwd/main.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/examples/l2fwd/main.c b/examples/l2fwd/main.c
index 028900b..dfa622f 100644
--- a/examples/l2fwd/main.c
+++ b/examples/l2fwd/main.c
@@ -138,6 +138,8 @@ struct l2fwd_port_statistics port_statistics[RTE_MAX_ETHPORTS];
 /* A tsc-based timer responsible for triggering statistics printout */
 static uint64_t timer_period = 10; /* default period is 10 seconds */
 
+static const char *mbuf_pool_ops; /**< mbuf pool handler */
+
 /* Print out statistics on packets dropped */
 static void
 print_stats(void)
@@ -382,15 +384,19 @@ static const char short_options[] =
 	"T:"  /* timer period */
 	;
 
+#define CMD_LINE_OPT_MBUF_POOL_OPS "mbuf-pool-ops"
+
 enum {
 	/* long options mapped to a short option */
 
 	/* first long only option value must be >= 256, so that we won't
 	 * conflict with short options */
 	CMD_LINE_OPT_MIN_NUM = 256,
+	CMD_LINE_OPT_MBUF_POOL_OPS_NUM,
 };
 
 static const struct option lgopts[] = {
+	{CMD_LINE_OPT_MBUF_POOL_OPS, 1, 0, CMD_LINE_OPT_MBUF_POOL_OPS_NUM},
 	{NULL, 0, 0, 0}
 };
 
@@ -440,6 +446,10 @@ l2fwd_parse_args(int argc, char **argv)
 			timer_period = timer_secs;
 			break;
 
+		case CMD_LINE_OPT_MBUF_POOL_OPS_NUM:
+			mbuf_pool_ops = strdup(optarg);
+			break;
+
 		default:
 			l2fwd_usage(prgname);
 			return -1;
-- 
2.8.1

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* Re: [RFC 0/7] changing mbuf pool handler
  2016-09-19 13:42 [RFC 0/7] changing mbuf pool handler Olivier Matz
                   ` (6 preceding siblings ...)
  2016-09-19 13:42 ` [RFC 7/7] l2fwd: new parameter to set mbuf pool ops Olivier Matz
@ 2016-09-22 11:52 ` Hemant Agrawal
  2016-10-03 15:49   ` Olivier Matz
  7 siblings, 1 reply; 15+ messages in thread
From: Hemant Agrawal @ 2016-09-22 11:52 UTC (permalink / raw)
  To: Olivier Matz, dev; +Cc: jerin.jacob, david.hunt

Hi Olivier

On 9/19/2016 7:12 PM, Olivier Matz wrote:
> Hello,
>
> Following discussion from [1] ("usages issue with external mempool").
>
> This is a tentative to make the mempool_ops feature introduced
> by David Hunt [2] more widely used by applications.
>
> It applies on top of a minor fix in mbuf lib [3].
>
> To sumarize the needs (please comment if I did not got it properly):
>
> - new hw-assisted mempool handlers will soon be introduced
> - to make use of it, the new mempool API [4] (rte_mempool_create_empty,
>   rte_mempool_populate, ...) has to be used
> - the legacy mempool API (rte_mempool_create) does not allow to change
>   the mempool ops. The default is "ring_<s|m>p_<s|m>c" depending on
>   flags.
> - the mbuf helper (rte_pktmbuf_pool_create) does not allow to change
>   them either, and the default is RTE_MBUF_DEFAULT_MEMPOOL_OPS
>   ("ring_mp_mc")
> - today, most (if not all) applications and examples use either
>   rte_pktmbuf_pool_create or rte_mempool_create to create the mbuf
>   pool, making it difficult to take advantage of this feature with
>   existing apps.
>
> My initial idea was to deprecate both rte_pktmbuf_pool_create() and
> rte_mempool_create(), forcing the applications to use the new API, which
> is more flexible. But after digging a bit, it appeared that
> rte_mempool_create() is widely used, and not only for mbufs. Deprecating
> it would have a big impact on applications, and replacing it with the
> new API would be overkill in many use-cases.

I agree with the proposal.

>
> So I finally tried the following approach (inspired from a suggestion
> Jerin [5]):
>
> - add a new mempool_ops parameter to rte_pktmbuf_pool_create(). This
>   unfortunatelly breaks the API, but I implemented an ABI compat layer.
>   If the patch is accepted, we could discuss how to announce/schedule
>   the API change.
> - update the applications and documentation to prefer
>   rte_pktmbuf_pool_create() as much as possible
> - update most used examples (testpmd, l2fwd, l3fwd) to add a new command
>   line argument to select the mempool handler
>
> I hope the external applications would then switch to
> rte_pktmbuf_pool_create(), since it supports most of the use-cases (even
> priv_size != 0, since we can call rte_mempool_obj_iter() after) .
>

I would still prefer if you could add "rte_mempool_obj_cb_t *obj_cb,
void *obj_cb_arg" into "rte_pktmbuf_pool_create". This single
consolidated wrapper will make it almost certain that applications will
not try to use rte_mempool_create for packet buffers.
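
Just to illustrate the suggestion, a call could then look something like
this (the two extra arguments do not exist in the current prototype, and
my_obj_init is only a placeholder for an application callback):

    struct rte_mempool *pool;

    pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF, MEMPOOL_CACHE_SIZE,
        0 /* priv_size */, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id(),
        "ring_mp_mc" /* ops name */,
        my_obj_init, NULL /* obj_cb, obj_cb_arg */);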



> Comments are of course welcome. Note: the patchset is not really
> tested yet.
>
>
> Thanks,
> Olivier
>
> [1] http://dpdk.org/ml/archives/dev/2016-July/044734.html
> [2] http://dpdk.org/ml/archives/dev/2016-June/042423.html
> [3] http://www.dpdk.org/dev/patchwork/patch/15923/
> [4] http://dpdk.org/ml/archives/dev/2016-May/039229.html
> [5] http://dpdk.org/ml/archives/dev/2016-July/044779.html
>
>
> Olivier Matz (7):
>   mbuf: set the handler at mbuf pool creation
>   mbuf: use helper to create the pool
>   testpmd: new parameter to set mbuf pool ops
>   l3fwd: rework long options parsing
>   l3fwd: new parameter to set mbuf pool ops
>   l2fwd: rework long options parsing
>   l2fwd: new parameter to set mbuf pool ops
>
>  app/pdump/main.c                                   |   2 +-
>  app/test-pipeline/init.c                           |   3 +-
>  app/test-pmd/parameters.c                          |   5 +
>  app/test-pmd/testpmd.c                             |  16 +-
>  app/test-pmd/testpmd.h                             |   1 +
>  app/test/test_cryptodev.c                          |   2 +-
>  app/test/test_cryptodev_perf.c                     |   2 +-
>  app/test/test_distributor.c                        |   2 +-
>  app/test/test_distributor_perf.c                   |   2 +-
>  app/test/test_kni.c                                |   2 +-
>  app/test/test_link_bonding.c                       |   2 +-
>  app/test/test_link_bonding_mode4.c                 |   2 +-
>  app/test/test_link_bonding_rssconf.c               |  11 +-
>  app/test/test_mbuf.c                               |   6 +-
>  app/test/test_pmd_perf.c                           |   3 +-
>  app/test/test_pmd_ring.c                           |   2 +-
>  app/test/test_reorder.c                            |   2 +-
>  app/test/test_sched.c                              |   2 +-
>  app/test/test_table.c                              |   2 +-
>  doc/guides/prog_guide/mbuf_lib.rst                 |   2 +-
>  doc/guides/sample_app_ug/ip_reassembly.rst         |  13 +-
>  doc/guides/sample_app_ug/ipv4_multicast.rst        |  12 +-
>  doc/guides/sample_app_ug/l2_forward_job_stats.rst  |  33 ++--
>  .../sample_app_ug/l2_forward_real_virtual.rst      |  26 ++-
>  doc/guides/sample_app_ug/ptpclient.rst             |  12 +-
>  doc/guides/sample_app_ug/quota_watermark.rst       |  26 ++-
>  drivers/net/bonding/rte_eth_bond_8023ad.c          |  13 +-
>  drivers/net/bonding/rte_eth_bond_alb.c             |   2 +-
>  examples/bond/main.c                               |   2 +-
>  examples/distributor/main.c                        |   2 +-
>  examples/dpdk_qat/main.c                           |   3 +-
>  examples/ethtool/ethtool-app/main.c                |   4 +-
>  examples/exception_path/main.c                     |   3 +-
>  examples/ip_fragmentation/main.c                   |   4 +-
>  examples/ip_pipeline/init.c                        |  19 ++-
>  examples/ip_reassembly/main.c                      |  16 +-
>  examples/ipsec-secgw/ipsec-secgw.c                 |   2 +-
>  examples/ipv4_multicast/main.c                     |   6 +-
>  examples/kni/main.c                                |   2 +-
>  examples/l2fwd-cat/l2fwd-cat.c                     |   3 +-
>  examples/l2fwd-crypto/main.c                       |   2 +-
>  examples/l2fwd-jobstats/main.c                     |   2 +-
>  examples/l2fwd-keepalive/main.c                    |   2 +-
>  examples/l2fwd/main.c                              |  36 ++++-
>  examples/l3fwd-acl/main.c                          |   2 +-
>  examples/l3fwd-power/main.c                        |   2 +-
>  examples/l3fwd-vf/main.c                           |   2 +-
>  examples/l3fwd/main.c                              | 180 +++++++++++----------
>  examples/link_status_interrupt/main.c              |   2 +-
>  examples/load_balancer/init.c                      |   2 +-
>  .../client_server_mp/mp_server/init.c              |   3 +-
>  examples/multi_process/l2fwd_fork/main.c           |  14 +-
>  examples/multi_process/symmetric_mp/main.c         |   2 +-
>  examples/netmap_compat/bridge/bridge.c             |   2 +-
>  examples/packet_ordering/main.c                    |   2 +-
>  examples/performance-thread/l3fwd-thread/main.c    |   2 +-
>  examples/ptpclient/ptpclient.c                     |   3 +-
>  examples/qos_meter/main.c                          |   2 +-
>  examples/qos_sched/init.c                          |   2 +-
>  examples/quota_watermark/qw/main.c                 |   2 +-
>  examples/rxtx_callbacks/main.c                     |   2 +-
>  examples/skeleton/basicfwd.c                       |   3 +-
>  examples/tep_termination/main.c                    |  17 +-
>  examples/vhost/main.c                              |   2 +-
>  examples/vhost_xen/main.c                          |   2 +-
>  examples/vmdq/main.c                               |   2 +-
>  examples/vmdq_dcb/main.c                           |   2 +-
>  lib/librte_mbuf/rte_mbuf.c                         |  34 +++-
>  lib/librte_mbuf/rte_mbuf.h                         |  44 +++--
>  lib/librte_mbuf/rte_mbuf_version.map               |   7 +
>  70 files changed, 366 insertions(+), 289 deletions(-)
>

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [RFC 0/7] changing mbuf pool handler
  2016-09-22 11:52 ` [RFC 0/7] changing mbuf pool handler Hemant Agrawal
@ 2016-10-03 15:49   ` Olivier Matz
  2016-10-05  9:41     ` Hunt, David
  0 siblings, 1 reply; 15+ messages in thread
From: Olivier Matz @ 2016-10-03 15:49 UTC (permalink / raw)
  To: Hemant Agrawal, dev; +Cc: jerin.jacob, david.hunt

Hi Hemant,

Thank you for your feedback.

On 09/22/2016 01:52 PM, Hemant Agrawal wrote:
> Hi Olivier
> 
> On 9/19/2016 7:12 PM, Olivier Matz wrote:
>> Hello,
>>
>> Following discussion from [1] ("usages issue with external mempool").
>>
>> This is a tentative to make the mempool_ops feature introduced
>> by David Hunt [2] more widely used by applications.
>>
>> It applies on top of a minor fix in mbuf lib [3].
>>
>> To sumarize the needs (please comment if I did not got it properly):
>>
>> - new hw-assisted mempool handlers will soon be introduced
>> - to make use of it, the new mempool API [4] (rte_mempool_create_empty,
>>   rte_mempool_populate, ...) has to be used
>> - the legacy mempool API (rte_mempool_create) does not allow to change
>>   the mempool ops. The default is "ring_<s|m>p_<s|m>c" depending on
>>   flags.
>> - the mbuf helper (rte_pktmbuf_pool_create) does not allow to change
>>   them either, and the default is RTE_MBUF_DEFAULT_MEMPOOL_OPS
>>   ("ring_mp_mc")
>> - today, most (if not all) applications and examples use either
>>   rte_pktmbuf_pool_create or rte_mempool_create to create the mbuf
>>   pool, making it difficult to take advantage of this feature with
>>   existing apps.
>>
>> My initial idea was to deprecate both rte_pktmbuf_pool_create() and
>> rte_mempool_create(), forcing the applications to use the new API, which
>> is more flexible. But after digging a bit, it appeared that
>> rte_mempool_create() is widely used, and not only for mbufs. Deprecating
>> it would have a big impact on applications, and replacing it with the
>> new API would be overkill in many use-cases.
> 
> I agree with the proposal.
> 
>>
>> So I finally tried the following approach (inspired from a suggestion
>> Jerin [5]):
>>
>> - add a new mempool_ops parameter to rte_pktmbuf_pool_create(). This
>>   unfortunatelly breaks the API, but I implemented an ABI compat layer.
>>   If the patch is accepted, we could discuss how to announce/schedule
>>   the API change.
>> - update the applications and documentation to prefer
>>   rte_pktmbuf_pool_create() as much as possible
>> - update most used examples (testpmd, l2fwd, l3fwd) to add a new command
>>   line argument to select the mempool handler
>>
>> I hope the external applications would then switch to
>> rte_pktmbuf_pool_create(), since it supports most of the use-cases (even
>> priv_size != 0, since we can call rte_mempool_obj_iter() after) .
>>
> 
> I will still prefer if you can add the "rte_mempool_obj_cb_t *obj_cb,
> void *obj_cb_arg" into "rte_pktmbuf_pool_create". This single
> consolidated wrapper will almost make it certain that applications will
> not try to use rte_mempool_create for packet buffers.

The patch changes the example applications. I'm not sure I understand
why adding these arguments would force applications not to use
rte_mempool_create() for packet buffers. Do you have an application in mind?

For the mempool_ops parameter, we must pass it at init because we need
to know the mempool handler before populating the pool. Object
initialization can be done afterwards, so I thought it was better to
reduce the number of arguments to avoid falling into the
mempool_create() syndrome :)
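
To illustrate, with the new prototype from this series (my_obj_init and
my_obj_init_arg are placeholders for an application-specific callback,
error handling omitted):

    struct rte_mempool *mp;

    /* the handler must be known here, before the pool is populated */
    mp = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF, MEMPOOL_CACHE_SIZE,
        0 /* priv_size */, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id(),
        "ring_mp_mc" /* or any other handler name */);

    /* extra per-object initialization can still be done afterwards */
    if (mp != NULL)
        rte_mempool_obj_iter(mp, my_obj_init, my_obj_init_arg);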

Any other opinions?

Regards,
Olivier

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [RFC 0/7] changing mbuf pool handler
  2016-10-03 15:49   ` Olivier Matz
@ 2016-10-05  9:41     ` Hunt, David
  2016-10-05 11:49       ` Hemant Agrawal
  0 siblings, 1 reply; 15+ messages in thread
From: Hunt, David @ 2016-10-05  9:41 UTC (permalink / raw)
  To: Olivier Matz, Hemant Agrawal, dev; +Cc: jerin.jacob

Hi Olivier,


On 3/10/2016 4:49 PM, Olivier Matz wrote:
> Hi Hemant,
>
> Thank you for your feedback.
>
> On 09/22/2016 01:52 PM, Hemant Agrawal wrote:
>> Hi Olivier
>>
>> On 9/19/2016 7:12 PM, Olivier Matz wrote:
>>> Hello,
>>>
>>> Following discussion from [1] ("usages issue with external mempool").
>>>
>>> This is a tentative to make the mempool_ops feature introduced
>>> by David Hunt [2] more widely used by applications.
>>>
>>> It applies on top of a minor fix in mbuf lib [3].
>>>
>>> To sumarize the needs (please comment if I did not got it properly):
>>>
>>> - new hw-assisted mempool handlers will soon be introduced
>>> - to make use of it, the new mempool API [4] (rte_mempool_create_empty,
>>>    rte_mempool_populate, ...) has to be used
>>> - the legacy mempool API (rte_mempool_create) does not allow to change
>>>    the mempool ops. The default is "ring_<s|m>p_<s|m>c" depending on
>>>    flags.
>>> - the mbuf helper (rte_pktmbuf_pool_create) does not allow to change
>>>    them either, and the default is RTE_MBUF_DEFAULT_MEMPOOL_OPS
>>>    ("ring_mp_mc")
>>> - today, most (if not all) applications and examples use either
>>>    rte_pktmbuf_pool_create or rte_mempool_create to create the mbuf
>>>    pool, making it difficult to take advantage of this feature with
>>>    existing apps.
>>>
>>> My initial idea was to deprecate both rte_pktmbuf_pool_create() and
>>> rte_mempool_create(), forcing the applications to use the new API, which
>>> is more flexible. But after digging a bit, it appeared that
>>> rte_mempool_create() is widely used, and not only for mbufs. Deprecating
>>> it would have a big impact on applications, and replacing it with the
>>> new API would be overkill in many use-cases.
>> I agree with the proposal.
>>
>>> So I finally tried the following approach (inspired from a suggestion
>>> Jerin [5]):
>>>
>>> - add a new mempool_ops parameter to rte_pktmbuf_pool_create(). This
>>>    unfortunatelly breaks the API, but I implemented an ABI compat layer.
>>>    If the patch is accepted, we could discuss how to announce/schedule
>>>    the API change.
>>> - update the applications and documentation to prefer
>>>    rte_pktmbuf_pool_create() as much as possible
>>> - update most used examples (testpmd, l2fwd, l3fwd) to add a new command
>>>    line argument to select the mempool handler
>>>
>>> I hope the external applications would then switch to
>>> rte_pktmbuf_pool_create(), since it supports most of the use-cases (even
>>> priv_size != 0, since we can call rte_mempool_obj_iter() after) .
>>>
>> I will still prefer if you can add the "rte_mempool_obj_cb_t *obj_cb,
>> void *obj_cb_arg" into "rte_pktmbuf_pool_create". This single
>> consolidated wrapper will almost make it certain that applications will
>> not try to use rte_mempool_create for packet buffers.
> The patch changes the example applications. I'm not sure I understand
> why adding these arguments would force application to not use
> rte_mempool_create() for packet buffers. Do you have a application in mind?
>
> For the mempool_ops parameter, we must pass it at init because we need
> to know the mempool handler before populating the pool. For object
> initialization, it can be done after, so I thought it was better to
> reduce the number of arguments to avoid to fall in the mempool_create()
> syndrom :)

I also agree with the proposal. Looks cleaner.

I would lean to the side of keeping the parameters to the minimum, i.e. 
not adding *obj_cb and *obj_cb_arg into rte_pktmbuf_pool_create. 
Developers always have the option of going with rte_mempool_create if 
they need more fine-grained control.

Regards,
Dave.

> Any other opinions?
>
> Regards,
> Olivier

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [RFC 0/7] changing mbuf pool handler
  2016-10-05  9:41     ` Hunt, David
@ 2016-10-05 11:49       ` Hemant Agrawal
  2016-10-05 13:15         ` Hunt, David
  0 siblings, 1 reply; 15+ messages in thread
From: Hemant Agrawal @ 2016-10-05 11:49 UTC (permalink / raw)
  To: Hunt, David, Olivier Matz, dev; +Cc: jerin.jacob

Hi Olivier,

> -----Original Message-----
> From: Hunt, David [mailto:david.hunt@intel.com]
> Hi Olivier,
> 
> 
> On 3/10/2016 4:49 PM, Olivier Matz wrote:
> > Hi Hemant,
> >
> > Thank you for your feedback.
> >
> > On 09/22/2016 01:52 PM, Hemant Agrawal wrote:
> >> Hi Olivier
> >>
> >> On 9/19/2016 7:12 PM, Olivier Matz wrote:
> >>> Hello,
> >>>
> >>> Following discussion from [1] ("usages issue with external mempool").
> >>>
> >>> This is a tentative to make the mempool_ops feature introduced by
> >>> David Hunt [2] more widely used by applications.
> >>>
> >>> It applies on top of a minor fix in mbuf lib [3].
> >>>
> >>> To sumarize the needs (please comment if I did not got it properly):
> >>>
> >>> - new hw-assisted mempool handlers will soon be introduced
> >>> - to make use of it, the new mempool API [4]
> (rte_mempool_create_empty,
> >>>    rte_mempool_populate, ...) has to be used
> >>> - the legacy mempool API (rte_mempool_create) does not allow to
> change
> >>>    the mempool ops. The default is "ring_<s|m>p_<s|m>c" depending on
> >>>    flags.
> >>> - the mbuf helper (rte_pktmbuf_pool_create) does not allow to change
> >>>    them either, and the default is RTE_MBUF_DEFAULT_MEMPOOL_OPS
> >>>    ("ring_mp_mc")
> >>> - today, most (if not all) applications and examples use either
> >>>    rte_pktmbuf_pool_create or rte_mempool_create to create the mbuf
> >>>    pool, making it difficult to take advantage of this feature with
> >>>    existing apps.
> >>>
> >>> My initial idea was to deprecate both rte_pktmbuf_pool_create() and
> >>> rte_mempool_create(), forcing the applications to use the new API,
> >>> which is more flexible. But after digging a bit, it appeared that
> >>> rte_mempool_create() is widely used, and not only for mbufs.
> >>> Deprecating it would have a big impact on applications, and
> >>> replacing it with the new API would be overkill in many use-cases.
> >> I agree with the proposal.
> >>
> >>> So I finally tried the following approach (inspired from a
> >>> suggestion Jerin [5]):
> >>>
> >>> - add a new mempool_ops parameter to rte_pktmbuf_pool_create().
> This
> >>>    unfortunatelly breaks the API, but I implemented an ABI compat layer.
> >>>    If the patch is accepted, we could discuss how to announce/schedule
> >>>    the API change.
> >>> - update the applications and documentation to prefer
> >>>    rte_pktmbuf_pool_create() as much as possible
> >>> - update most used examples (testpmd, l2fwd, l3fwd) to add a new
> command
> >>>    line argument to select the mempool handler
> >>>
> >>> I hope the external applications would then switch to
> >>> rte_pktmbuf_pool_create(), since it supports most of the use-cases
> >>> (even priv_size != 0, since we can call rte_mempool_obj_iter() after) .
> >>>
> >> I will still prefer if you can add the "rte_mempool_obj_cb_t *obj_cb,
> >> void *obj_cb_arg" into "rte_pktmbuf_pool_create". This single
> >> consolidated wrapper will almost make it certain that applications
> >> will not try to use rte_mempool_create for packet buffers.
> > The patch changes the example applications. I'm not sure I understand
> > why adding these arguments would force application to not use
> > rte_mempool_create() for packet buffers. Do you have a application in
> mind?
> >
> > For the mempool_ops parameter, we must pass it at init because we need
> > to know the mempool handler before populating the pool. For object
> > initialization, it can be done after, so I thought it was better to
> > reduce the number of arguments to avoid to fall in the
> > mempool_create() syndrom :)
> 
> I also agree with the proposal. Looks cleaner.
> 
> I would lean to the side of keeping the parameters to the minimum, i.e.
> not adding *obj_cb and *obj_cb_arg into rte_pktmbuf_pool_create.
> Developers always have the option of going with rte_mempool_create if they
> need more fine-grained control.

[Hemant] The implementations with hw-offloaded mempools don't want developers using *rte_mempool_create* for packet buffer pools.
This API does not work for hw-offloaded mempools.

Also, *rte_mempool_create_empty* may not be convenient for many applications, as it requires calling 4+ APIs.
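
For instance, this is roughly what an application has to do today with the
"empty" API to get an mbuf pool on a hw handler ("my_hw_handler" is just a
placeholder name, error checks omitted):

    struct rte_mempool *mp;

    mp = rte_mempool_create_empty("mbuf_pool", NB_MBUF,
        sizeof(struct rte_mbuf) + RTE_MBUF_DEFAULT_BUF_SIZE,
        MBUF_CACHE_SIZE, sizeof(struct rte_pktmbuf_pool_private),
        rte_socket_id(), 0);
    rte_mempool_set_ops_byname(mp, "my_hw_handler", NULL);
    rte_pktmbuf_pool_init(mp, NULL);
    rte_mempool_populate_default(mp);
    rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);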

Olivier is not in favor of deprecating *rte_mempool_create*. I agree with the concerns he raised.

Essentially, I was suggesting upgrading *rte_pktmbuf_pool_create* to be like *rte_mempool_create*, but exclusively for packet buffers.

This will provide a clear segregation of API usage between packet buffer pools and all other types of mempools.


Regards,
Hemant

> 
> Regards,
> Dave.
> 
> > Any other opinions?
> >
> > Regards,
> > Olivier

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [RFC 0/7] changing mbuf pool handler
  2016-10-05 11:49       ` Hemant Agrawal
@ 2016-10-05 13:15         ` Hunt, David
  0 siblings, 0 replies; 15+ messages in thread
From: Hunt, David @ 2016-10-05 13:15 UTC (permalink / raw)
  To: Hemant Agrawal, Olivier Matz, dev; +Cc: jerin.jacob



On 5/10/2016 12:49 PM, Hemant Agrawal wrote:
> Hi Olivier,
>
>> -----Original Message-----
>> From: Hunt, David [mailto:david.hunt@intel.com]
>> Hi Olivier,
>>
>>
>> On 3/10/2016 4:49 PM, Olivier Matz wrote:
>>> Hi Hemant,
>>>
>>> Thank you for your feedback.
>>>
>>> On 09/22/2016 01:52 PM, Hemant Agrawal wrote:
>>>> Hi Olivier
>>>>
>>>> On 9/19/2016 7:12 PM, Olivier Matz wrote:
>>>>> Hello,
>>>>>
>>>>> Following discussion from [1] ("usages issue with external mempool").
>>>>>
>>>>> This is a tentative to make the mempool_ops feature introduced by
>>>>> David Hunt [2] more widely used by applications.
>>>>>
>>>>> It applies on top of a minor fix in mbuf lib [3].
>>>>>
>>>>> To sumarize the needs (please comment if I did not got it properly):
>>>>>
>>>>> - new hw-assisted mempool handlers will soon be introduced
>>>>> - to make use of it, the new mempool API [4]
>> (rte_mempool_create_empty,
>>>>>     rte_mempool_populate, ...) has to be used
>>>>> - the legacy mempool API (rte_mempool_create) does not allow to
>> change
>>>>>     the mempool ops. The default is "ring_<s|m>p_<s|m>c" depending on
>>>>>     flags.
>>>>> - the mbuf helper (rte_pktmbuf_pool_create) does not allow to change
>>>>>     them either, and the default is RTE_MBUF_DEFAULT_MEMPOOL_OPS
>>>>>     ("ring_mp_mc")
>>>>> - today, most (if not all) applications and examples use either
>>>>>     rte_pktmbuf_pool_create or rte_mempool_create to create the mbuf
>>>>>     pool, making it difficult to take advantage of this feature with
>>>>>     existing apps.
>>>>>
>>>>> My initial idea was to deprecate both rte_pktmbuf_pool_create() and
>>>>> rte_mempool_create(), forcing the applications to use the new API,
>>>>> which is more flexible. But after digging a bit, it appeared that
>>>>> rte_mempool_create() is widely used, and not only for mbufs.
>>>>> Deprecating it would have a big impact on applications, and
>>>>> replacing it with the new API would be overkill in many use-cases.
>>>> I agree with the proposal.
>>>>
>>>>> So I finally tried the following approach (inspired from a
>>>>> suggestion Jerin [5]):
>>>>>
>>>>> - add a new mempool_ops parameter to rte_pktmbuf_pool_create().
>> This
>>>>>     unfortunatelly breaks the API, but I implemented an ABI compat layer.
>>>>>     If the patch is accepted, we could discuss how to announce/schedule
>>>>>     the API change.
>>>>> - update the applications and documentation to prefer
>>>>>     rte_pktmbuf_pool_create() as much as possible
>>>>> - update most used examples (testpmd, l2fwd, l3fwd) to add a new
>> command
>>>>>     line argument to select the mempool handler
>>>>>
>>>>> I hope the external applications would then switch to
>>>>> rte_pktmbuf_pool_create(), since it supports most of the use-cases
>>>>> (even priv_size != 0, since we can call rte_mempool_obj_iter() after) .
>>>>>
>>>> I will still prefer if you can add the "rte_mempool_obj_cb_t *obj_cb,
>>>> void *obj_cb_arg" into "rte_pktmbuf_pool_create". This single
>>>> consolidated wrapper will almost make it certain that applications
>>>> will not try to use rte_mempool_create for packet buffers.
>>> The patch changes the example applications. I'm not sure I understand
>>> why adding these arguments would force application to not use
>>> rte_mempool_create() for packet buffers. Do you have a application in
>> mind?
>>> For the mempool_ops parameter, we must pass it at init because we need
>>> to know the mempool handler before populating the pool. For object
>>> initialization, it can be done after, so I thought it was better to
>>> reduce the number of arguments to avoid to fall in the
>>> mempool_create() syndrom :)
>> I also agree with the proposal. Looks cleaner.
>>
>> I would lean to the side of keeping the parameters to the minimum, i.e.
>> not adding *obj_cb and *obj_cb_arg into rte_pktmbuf_pool_create.
>> Developers always have the option of going with rte_mempool_create if they
>> need more fine-grained control.
> [Hemant] The implementations with hw offloaded mempools don't want developer using *rte_mempool_create* for packet buffer pools.
> This API does not work for hw offloaded mempool.
>
> Also, *rte_mempool_create_empty* - may not be convenient for many application, as it requires calling  4+ APIs.
>
> Olivier is not in favor of deprecating the *rte_mempool_create*.   I agree with concerns raised by him.
>
> Essentially, I was suggesting to upgrade * rte_pktmbuf_pool_create* to be like *rte_mempool_create*  for packet buffers exclusively.
>
> This will provide a clear segregation for API usages w.r.t the packet buffer pool vs all other type of mempools.

Yes, it does sound like we need those extra parameters on 
rte_pktmbuf_pool_create.
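
Something like the following sketch, perhaps. This is only an
illustration: the trailing ops name comes from the RFC, while the
obj_cb/obj_cb_arg pair is Hemant's suggestion and is not part of any
existing signature.

/* Hypothetical consolidated prototype, for discussion only. */
struct rte_mempool *
rte_pktmbuf_pool_create(const char *name, unsigned n,
	unsigned cache_size, uint16_t priv_size,
	uint16_t data_room_size, int socket_id,
	const char *ops_name,
	rte_mempool_obj_cb_t *obj_cb, void *obj_cb_arg);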

Regards,
Dave.

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [RFC 2/7] mbuf: use helper to create the pool
  2016-09-19 13:42 ` [RFC 2/7] mbuf: use helper to create the pool Olivier Matz
@ 2017-01-16 15:30   ` Santosh Shukla
  2017-01-31 10:31     ` Olivier Matz
  0 siblings, 1 reply; 15+ messages in thread
From: Santosh Shukla @ 2017-01-16 15:30 UTC (permalink / raw)
  To: Olivier Matz; +Cc: dev, jerin.jacob, hemant.agrawal, david.hunt

Hi Olivier,


On Mon, Sep 19, 2016 at 03:42:42PM +0200, Olivier Matz wrote:
> When possible, replace the uses of rte_mempool_create() with
> the helper provided in librte_mbuf: rte_pktmbuf_pool_create().
> 
> This is the preferred way to create a mbuf pool.
> 
> By the way, also update the documentation.
>

I am working on an ext-mempool PMD driver for the cvm SoC,
so I am interested in this thread.

I am wondering why this thread was not followed up. Is it
because we don't want to deprecate rte_mempool_create()?
Or, if we do want to, which release are you targeting?

Besides that, some high-level comments:
- Your changeset is missing the mempool test applications, i.e. test_mempool.c/
  test_mempool_perf.c; do you plan to accommodate them?
- ext-mempool does not necessarily need MBUF_CACHE_SIZE. Let the HW manager
  hand buffers directly over to the application rather than caching the same
  buffers per core. It will save some cycles. What do you think?
- I figured out that the ext-mempool API does not map well onto the cvm hw,
  for a few reasons.
  Let's say the application calls:
  rte_pktmbuf_pool_create()
   --> rte_mempool_create_empty()
   --> rte_mempool_ops_byname()
   --> rte_mempool_populate_default()
         --> rte_mempool_ops_alloc()
                  --> ext-mempool-specific pool-create handler

In my case, the ext-mempool pool-create handler needs the hugepage-mapped
mz->vaddr/paddr in order to program the HW manager with the pool's start/end
address, and the current ext-mempool API doesn't support such a case.
Therefore I chose to add a new op, something like the one below, which could
address this case; we'll post the patch soon.

/**
 * Set the memzone va/pa addr range in the external pool.
 */
typedef void (*rte_mempool_populate_mz_range_t)(const struct rte_memzone *mz);

/** Structure defining mempool operations structure */
struct rte_mempool_ops {
        char name[RTE_MEMPOOL_OPS_NAMESIZE]; /**< Name of mempool ops struct. */
        rte_mempool_alloc_t alloc;       /**< Allocate private data. */
        rte_mempool_free_t free;         /**< Free the external pool. */
        rte_mempool_enqueue_t enqueue;   /**< Enqueue an object. */
        rte_mempool_dequeue_t dequeue;   /**< Dequeue an object. */
        rte_mempool_get_count get_count; /**< Get qty of available objs. */
        rte_mempool_populate_mz_range_t populate_mz_range; /**< Set memzone
                                                                per-pool info. */
} __rte_cache_aligned;
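
For illustration, a driver-side sketch of how such an op could be wired up
might look like the following. All the "cvm_" names are hypothetical
placeholders, the other handlers are omitted for brevity, and the
.populate_mz_range field of course only exists with the proposed change
applied:

/* Hypothetical driver-side sketch; cvm_* names are placeholders. */
#include <rte_mempool.h>
#include <rte_memzone.h>

static phys_addr_t cvm_pool_start;
static phys_addr_t cvm_pool_end;

/* Record the memzone range so the HW manager can later be programmed
 * with the start/end addresses of the pool. */
static void
cvm_populate_mz_range(const struct rte_memzone *mz)
{
	cvm_pool_start = mz->phys_addr;
	cvm_pool_end = mz->phys_addr + mz->len;
}

static const struct rte_mempool_ops cvm_hw_pool_ops = {
	.name = "cvm_hw_pool",
	/* .alloc, .free, .enqueue, .dequeue, .get_count omitted here */
	.populate_mz_range = cvm_populate_mz_range,
};

MEMPOOL_REGISTER_OPS(cvm_hw_pool_ops);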

Let me know your opinion.

Thanks.
> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> ---
>  app/test/test_link_bonding_rssconf.c               | 11 ++++----
>  doc/guides/prog_guide/mbuf_lib.rst                 |  2 +-
>  doc/guides/sample_app_ug/ip_reassembly.rst         | 13 +++++----
>  doc/guides/sample_app_ug/ipv4_multicast.rst        | 12 ++++----
>  doc/guides/sample_app_ug/l2_forward_job_stats.rst  | 33 ++++++++--------------
>  .../sample_app_ug/l2_forward_real_virtual.rst      | 26 +++++++----------
>  doc/guides/sample_app_ug/ptpclient.rst             | 12 ++------
>  doc/guides/sample_app_ug/quota_watermark.rst       | 26 ++++++-----------
>  drivers/net/bonding/rte_eth_bond_8023ad.c          | 13 ++++-----
>  examples/ip_pipeline/init.c                        | 19 ++++++-------
>  examples/ip_reassembly/main.c                      | 16 +++++------
>  examples/multi_process/l2fwd_fork/main.c           | 14 ++++-----
>  examples/tep_termination/main.c                    | 17 ++++++-----
>  lib/librte_mbuf/rte_mbuf.c                         |  7 +++--
>  lib/librte_mbuf/rte_mbuf.h                         | 29 +++++++++++--------
>  15 files changed, 111 insertions(+), 139 deletions(-)
> 
> diff --git a/app/test/test_link_bonding_rssconf.c b/app/test/test_link_bonding_rssconf.c
> index 34f1c16..dd1bcc7 100644
> --- a/app/test/test_link_bonding_rssconf.c
> +++ b/app/test/test_link_bonding_rssconf.c
> @@ -67,7 +67,7 @@
>  #define SLAVE_RXTX_QUEUE_FMT      ("rssconf_slave%d_q%d")
>  
>  #define NUM_MBUFS 8191
> -#define MBUF_SIZE (1600 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
> +#define MBUF_SIZE (1600 + RTE_PKTMBUF_HEADROOM)
>  #define MBUF_CACHE_SIZE 250
>  #define BURST_SIZE 32
>  
> @@ -536,13 +536,12 @@ test_setup(void)
>  
>  	if (test_params.mbuf_pool == NULL) {
>  
> -		test_params.mbuf_pool = rte_mempool_create("RSS_MBUF_POOL", NUM_MBUFS *
> -				SLAVE_COUNT, MBUF_SIZE, MBUF_CACHE_SIZE,
> -				sizeof(struct rte_pktmbuf_pool_private), rte_pktmbuf_pool_init,
> -				NULL, rte_pktmbuf_init, NULL, rte_socket_id(), 0);
> +		test_params.mbuf_pool = rte_pktmbuf_pool_create(
> +			"RSS_MBUF_POOL", NUM_MBUFS * SLAVE_COUNT,
> +			MBUF_CACHE_SIZE, 0, MBUF_SIZE, rte_socket_id(), NULL);
>  
>  		TEST_ASSERT(test_params.mbuf_pool != NULL,
> -				"rte_mempool_create failed\n");
> +				"rte_pktmbuf_pool_create failed\n");
>  	}
>  
>  	/* Create / initialize ring eth devs. */
> diff --git a/doc/guides/prog_guide/mbuf_lib.rst b/doc/guides/prog_guide/mbuf_lib.rst
> index 8e61682..b366e04 100644
> --- a/doc/guides/prog_guide/mbuf_lib.rst
> +++ b/doc/guides/prog_guide/mbuf_lib.rst
> @@ -103,7 +103,7 @@ Constructors
>  Packet and control mbuf constructors are provided by the API.
>  The rte_pktmbuf_init() and rte_ctrlmbuf_init() functions initialize some fields in the mbuf structure that
>  are not modified by the user once created (mbuf type, origin pool, buffer start address, and so on).
> -This function is given as a callback function to the rte_mempool_create() function at pool creation time.
> +This function is given as a callback function to the rte_pktmbuf_pool_create() or the rte_mempool_create() function at pool creation time.
>  
>  Allocating and Freeing mbufs
>  ----------------------------
> diff --git a/doc/guides/sample_app_ug/ip_reassembly.rst b/doc/guides/sample_app_ug/ip_reassembly.rst
> index 3c5cc70..4b6023a 100644
> --- a/doc/guides/sample_app_ug/ip_reassembly.rst
> +++ b/doc/guides/sample_app_ug/ip_reassembly.rst
> @@ -223,11 +223,14 @@ each RX queue uses its own mempool.
>  
>      snprintf(buf, sizeof(buf), "mbuf_pool_%u_%u", lcore, queue);
>  
> -    if ((rxq->pool = rte_mempool_create(buf, nb_mbuf, MBUF_SIZE, 0, sizeof(struct rte_pktmbuf_pool_private), rte_pktmbuf_pool_init, NULL,
> -        rte_pktmbuf_init, NULL, socket, MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET)) == NULL) {
> -
> -            RTE_LOG(ERR, IP_RSMBL, "mempool_create(%s) failed", buf);
> -            return -1;
> +    rxq->pool = rte_pktmbuf_pool_create(buf, nb_mbuf,
> +    	0, /* cache size */
> +    	0, /* priv size */
> +    	MBUF_DATA_SIZE, socket, "ring_sp_sc");
> +    if (rxq->pool == NULL) {
> +    	RTE_LOG(ERR, IP_RSMBL,
> +    		"rte_pktmbuf_pool_create(%s) failed", buf);
> +    	return -1;
>      }
>  
>  Packet Reassembly and Forwarding
> diff --git a/doc/guides/sample_app_ug/ipv4_multicast.rst b/doc/guides/sample_app_ug/ipv4_multicast.rst
> index 72da8c4..099d61a 100644
> --- a/doc/guides/sample_app_ug/ipv4_multicast.rst
> +++ b/doc/guides/sample_app_ug/ipv4_multicast.rst
> @@ -145,12 +145,12 @@ Memory pools for indirect buffers are initialized differently from the memory po
>  
>  .. code-block:: c
>  
> -    packet_pool = rte_mempool_create("packet_pool", NB_PKT_MBUF, PKT_MBUF_SIZE, 32, sizeof(struct rte_pktmbuf_pool_private),
> -                                     rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL, rte_socket_id(), 0);
> -
> -    header_pool = rte_mempool_create("header_pool", NB_HDR_MBUF, HDR_MBUF_SIZE, 32, 0, NULL, NULL, rte_pktmbuf_init, NULL, rte_socket_id(), 0);
> -    clone_pool = rte_mempool_create("clone_pool", NB_CLONE_MBUF,
> -    CLONE_MBUF_SIZE, 32, 0, NULL, NULL, rte_pktmbuf_init, NULL, rte_socket_id(), 0);
> +    packet_pool = rte_pktmbuf_pool_create("packet_pool", NB_PKT_MBUF, 32,
> +    	0, PKT_MBUF_DATA_SIZE, rte_socket_id(), NULL);
> +    header_pool = rte_pktmbuf_pool_create("header_pool", NB_HDR_MBUF, 32,
> +    	0, HDR_MBUF_DATA_SIZE, rte_socket_id(), NULL);
> +    clone_pool = rte_pktmbuf_pool_create("clone_pool", NB_CLONE_MBUF, 32,
> +    	0, 0, rte_socket_id(), NULL);
>  
>  The reason for this is because indirect buffers are not supposed to hold any packet data and
>  therefore can be initialized with lower amount of reserved memory for each buffer.
> diff --git a/doc/guides/sample_app_ug/l2_forward_job_stats.rst b/doc/guides/sample_app_ug/l2_forward_job_stats.rst
> index 2444e36..a1b3f43 100644
> --- a/doc/guides/sample_app_ug/l2_forward_job_stats.rst
> +++ b/doc/guides/sample_app_ug/l2_forward_job_stats.rst
> @@ -193,36 +193,25 @@ and the application to store network packet data:
>  .. code-block:: c
>  
>      /* create the mbuf pool */
> -    l2fwd_pktmbuf_pool =
> -        rte_mempool_create("mbuf_pool", NB_MBUF,
> -                   MBUF_SIZE, 32,
> -                   sizeof(struct rte_pktmbuf_pool_private),
> -                   rte_pktmbuf_pool_init, NULL,
> -                   rte_pktmbuf_init, NULL,
> -                   rte_socket_id(), 0);
> +    l2fwd_pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF,
> +    	MEMPOOL_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
> +    	rte_socket_id(), NULL);
>  
>      if (l2fwd_pktmbuf_pool == NULL)
>          rte_exit(EXIT_FAILURE, "Cannot init mbuf pool\n");
>  
>  The rte_mempool is a generic structure used to handle pools of objects.
> -In this case, it is necessary to create a pool that will be used by the driver,
> -which expects to have some reserved space in the mempool structure,
> -sizeof(struct rte_pktmbuf_pool_private) bytes.
> -The number of allocated pkt mbufs is NB_MBUF, with a size of MBUF_SIZE each.
> -A per-lcore cache of 32 mbufs is kept.
> +In this case, it is necessary to create a pool that will be used by the driver.
> +The number of allocated pkt mbufs is NB_MBUF, with a data room size of
> +RTE_MBUF_DEFAULT_BUF_SIZE each.
> +A per-lcore cache of MEMPOOL_CACHE_SIZE mbufs is kept.
>  The memory is allocated in rte_socket_id() socket,
>  but it is possible to extend this code to allocate one mbuf pool per socket.
>  
> -Two callback pointers are also given to the rte_mempool_create() function:
> -
> -*   The first callback pointer is to rte_pktmbuf_pool_init() and is used
> -    to initialize the private data of the mempool, which is needed by the driver.
> -    This function is provided by the mbuf API, but can be copied and extended by the developer.
> -
> -*   The second callback pointer given to rte_mempool_create() is the mbuf initializer.
> -    The default is used, that is, rte_pktmbuf_init(), which is provided in the rte_mbuf library.
> -    If a more complex application wants to extend the rte_pktmbuf structure for its own needs,
> -    a new function derived from rte_pktmbuf_init( ) can be created.
> +The rte_pktmbuf_pool_create() function uses the default mbuf pool and mbuf
> +initializers, respectively rte_pktmbuf_pool_init() and rte_pktmbuf_init().
> +An advanced application may want to use the mempool API to create the
> +mbuf pool with more control.
>  
>  Driver Initialization
>  ~~~~~~~~~~~~~~~~~~~~~
> diff --git a/doc/guides/sample_app_ug/l2_forward_real_virtual.rst b/doc/guides/sample_app_ug/l2_forward_real_virtual.rst
> index a1c10c0..2330148 100644
> --- a/doc/guides/sample_app_ug/l2_forward_real_virtual.rst
> +++ b/doc/guides/sample_app_ug/l2_forward_real_virtual.rst
> @@ -197,31 +197,25 @@ and the application to store network packet data:
>  
>      /* create the mbuf pool */
>  
> -    l2fwd_pktmbuf_pool = rte_mempool_create("mbuf_pool", NB_MBUF, MBUF_SIZE, 32, sizeof(struct rte_pktmbuf_pool_private),
> -        rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL, SOCKET0, 0);
> +    l2fwd_pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF,
> +    	MEMPOOL_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
> +    	rte_socket_id(), NULL);
>  
>      if (l2fwd_pktmbuf_pool == NULL)
>          rte_panic("Cannot init mbuf pool\n");
>  
>  The rte_mempool is a generic structure used to handle pools of objects.
> -In this case, it is necessary to create a pool that will be used by the driver,
> -which expects to have some reserved space in the mempool structure,
> -sizeof(struct rte_pktmbuf_pool_private) bytes.
> -The number of allocated pkt mbufs is NB_MBUF, with a size of MBUF_SIZE each.
> +In this case, it is necessary to create a pool that will be used by the driver.
> +The number of allocated pkt mbufs is NB_MBUF, with a data room size of
> +RTE_MBUF_DEFAULT_BUF_SIZE each.
>  A per-lcore cache of 32 mbufs is kept.
>  The memory is allocated in NUMA socket 0,
>  but it is possible to extend this code to allocate one mbuf pool per socket.
>  
> -Two callback pointers are also given to the rte_mempool_create() function:
> -
> -*   The first callback pointer is to rte_pktmbuf_pool_init() and is used
> -    to initialize the private data of the mempool, which is needed by the driver.
> -    This function is provided by the mbuf API, but can be copied and extended by the developer.
> -
> -*   The second callback pointer given to rte_mempool_create() is the mbuf initializer.
> -    The default is used, that is, rte_pktmbuf_init(), which is provided in the rte_mbuf library.
> -    If a more complex application wants to extend the rte_pktmbuf structure for its own needs,
> -    a new function derived from rte_pktmbuf_init( ) can be created.
> +The rte_pktmbuf_pool_create() function uses the default mbuf pool and mbuf
> +initializers, respectively rte_pktmbuf_pool_init() and rte_pktmbuf_init().
> +An advanced application may want to use the mempool API to create the
> +mbuf pool with more control.
>  
>  .. _l2_fwd_app_dvr_init:
>  
> diff --git a/doc/guides/sample_app_ug/ptpclient.rst b/doc/guides/sample_app_ug/ptpclient.rst
> index 6e425b7..4bd87c2 100644
> --- a/doc/guides/sample_app_ug/ptpclient.rst
> +++ b/doc/guides/sample_app_ug/ptpclient.rst
> @@ -171,15 +171,9 @@ used by the application:
>  
>  .. code-block:: c
>  
> -    mbuf_pool = rte_mempool_create("MBUF_POOL",
> -                                   NUM_MBUFS * nb_ports,
> -                                   MBUF_SIZE,
> -                                   MBUF_CACHE_SIZE,
> -                                   sizeof(struct rte_pktmbuf_pool_private),
> -                                   rte_pktmbuf_pool_init, NULL,
> -                                   rte_pktmbuf_init,      NULL,
> -                                   rte_socket_id(),
> -                                   0);
> +    mbuf_pool = rte_pktmbuf_pool_create("MBUF_POOL", NUM_MBUFS * nb_ports,
> +    	MBUF_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id(),
> +    	NULL);
>  
>  Mbufs are the packet buffer structure used by DPDK. They are explained in
>  detail in the "Mbuf Library" section of the *DPDK Programmer's Guide*.
> diff --git a/doc/guides/sample_app_ug/quota_watermark.rst b/doc/guides/sample_app_ug/quota_watermark.rst
> index c56683a..f3a6624 100644
> --- a/doc/guides/sample_app_ug/quota_watermark.rst
> +++ b/doc/guides/sample_app_ug/quota_watermark.rst
> @@ -254,32 +254,24 @@ It contains a set of mbuf objects that are used by the driver and the applicatio
>  .. code-block:: c
>  
>      /* Create a pool of mbuf to store packets */
> -
> -    mbuf_pool = rte_mempool_create("mbuf_pool", MBUF_PER_POOL, MBUF_SIZE, 32, sizeof(struct rte_pktmbuf_pool_private),
> -        rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL, rte_socket_id(), 0);
> +    mbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", MBUF_PER_POOL, 32, 0,
> +    	MBUF_DATA_SIZE, rte_socket_id(), NULL);
>  
>      if (mbuf_pool == NULL)
>          rte_panic("%s\n", rte_strerror(rte_errno));
>  
>  The rte_mempool is a generic structure used to handle pools of objects.
> -In this case, it is necessary to create a pool that will be used by the driver,
> -which expects to have some reserved space in the mempool structure, sizeof(struct rte_pktmbuf_pool_private) bytes.
> +In this case, it is necessary to create a pool that will be used by the driver.
>  
> -The number of allocated pkt mbufs is MBUF_PER_POOL, with a size of MBUF_SIZE each.
> +The number of allocated pkt mbufs is MBUF_PER_POOL, with a data room size
> +of MBUF_DATA_SIZE each.
>  A per-lcore cache of 32 mbufs is kept.
>  The memory is allocated in on the master lcore's socket, but it is possible to extend this code to allocate one mbuf pool per socket.
>  
> -Two callback pointers are also given to the rte_mempool_create() function:
> -
> -*   The first callback pointer is to rte_pktmbuf_pool_init() and is used to initialize the private data of the mempool,
> -    which is needed by the driver.
> -    This function is provided by the mbuf API, but can be copied and extended by the developer.
> -
> -*   The second callback pointer given to rte_mempool_create() is the mbuf initializer.
> -
> -The default is used, that is, rte_pktmbuf_init(), which is provided in the rte_mbuf library.
> -If a more complex application wants to extend the rte_pktmbuf structure for its own needs,
> -a new function derived from rte_pktmbuf_init() can be created.
> +The rte_pktmbuf_pool_create() function uses the default mbuf pool and mbuf
> +initializers, respectively rte_pktmbuf_pool_init() and rte_pktmbuf_init().
> +An advanced application may want to use the mempool API to create the
> +mbuf pool with more control.
>  
>  Ports Configuration and Pairing
>  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c
> index 2f7ae70..e234c63 100644
> --- a/drivers/net/bonding/rte_eth_bond_8023ad.c
> +++ b/drivers/net/bonding/rte_eth_bond_8023ad.c
> @@ -888,8 +888,8 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev, uint8_t slave_id)
>  	RTE_ASSERT(port->tx_ring == NULL);
>  	socket_id = rte_eth_devices[slave_id].data->numa_node;
>  
> -	element_size = sizeof(struct slow_protocol_frame) + sizeof(struct rte_mbuf)
> -				+ RTE_PKTMBUF_HEADROOM;
> +	element_size = sizeof(struct slow_protocol_frame) +
> +		RTE_PKTMBUF_HEADROOM;
>  
>  	/* The size of the mempool should be at least:
>  	 * the sum of the TX descriptors + BOND_MODE_8023AX_SLAVE_TX_PKTS */
> @@ -900,11 +900,10 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev, uint8_t slave_id)
>  	}
>  
>  	snprintf(mem_name, RTE_DIM(mem_name), "slave_port%u_pool", slave_id);
> -	port->mbuf_pool = rte_mempool_create(mem_name,
> -		total_tx_desc, element_size,
> -		RTE_MEMPOOL_CACHE_MAX_SIZE >= 32 ? 32 : RTE_MEMPOOL_CACHE_MAX_SIZE,
> -		sizeof(struct rte_pktmbuf_pool_private), rte_pktmbuf_pool_init,
> -		NULL, rte_pktmbuf_init, NULL, socket_id, MEMPOOL_F_NO_SPREAD);
> +	port->mbuf_pool = rte_pktmbuf_pool_create(mem_name, total_tx_desc,
> +		RTE_MEMPOOL_CACHE_MAX_SIZE >= 32 ?
> +			32 : RTE_MEMPOOL_CACHE_MAX_SIZE,
> +		0, element_size, socket_id, NULL);
>  
>  	/* Any memory allocation failure in initalization is critical because
>  	 * resources can't be free, so reinitialization is impossible. */
> diff --git a/examples/ip_pipeline/init.c b/examples/ip_pipeline/init.c
> index cd167f6..d86aa86 100644
> --- a/examples/ip_pipeline/init.c
> +++ b/examples/ip_pipeline/init.c
> @@ -316,16 +316,15 @@ app_init_mempool(struct app_params *app)
>  		struct app_mempool_params *p = &app->mempool_params[i];
>  
>  		APP_LOG(app, HIGH, "Initializing %s ...", p->name);
> -		app->mempool[i] = rte_mempool_create(
> -				p->name,
> -				p->pool_size,
> -				p->buffer_size,
> -				p->cache_size,
> -				sizeof(struct rte_pktmbuf_pool_private),
> -				rte_pktmbuf_pool_init, NULL,
> -				rte_pktmbuf_init, NULL,
> -				p->cpu_socket_id,
> -				0);
> +		app->mempool[i] = rte_pktmbuf_pool_create(
> +			p->name,
> +			p->pool_size,
> +			p->cache_size,
> +			0, /* priv_size */
> +			p->buffer_size -
> +				sizeof(struct rte_mbuf), /* mbuf data size */
> +			p->cpu_socket_id,
> +			NULL);
>  
>  		if (app->mempool[i] == NULL)
>  			rte_panic("%s init error\n", p->name);
> diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
> index 50fe422..8648161 100644
> --- a/examples/ip_reassembly/main.c
> +++ b/examples/ip_reassembly/main.c
> @@ -84,9 +84,7 @@
>  
>  #define MAX_JUMBO_PKT_LEN  9600
>  
> -#define	BUF_SIZE	RTE_MBUF_DEFAULT_DATAROOM
> -#define MBUF_SIZE	\
> -	(BUF_SIZE + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
> +#define	MBUF_DATA_SIZE	RTE_MBUF_DEFAULT_BUF_SIZE
>  
>  #define NB_MBUF 8192
>  
> @@ -909,11 +907,13 @@ setup_queue_tbl(struct rx_queue *rxq, uint32_t lcore, uint32_t queue)
>  
>  	snprintf(buf, sizeof(buf), "mbuf_pool_%u_%u", lcore, queue);
>  
> -	if ((rxq->pool = rte_mempool_create(buf, nb_mbuf, MBUF_SIZE, 0,
> -			sizeof(struct rte_pktmbuf_pool_private),
> -			rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL,
> -			socket, MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET)) == NULL) {
> -		RTE_LOG(ERR, IP_RSMBL, "mempool_create(%s) failed", buf);
> +	rxq->pool = rte_pktmbuf_pool_create(buf, nb_mbuf,
> +		0, /* cache size */
> +		0, /* priv size */
> +		MBUF_DATA_SIZE, socket, "ring_sp_sc");
> +	if (rxq->pool == NULL) {
> +		RTE_LOG(ERR, IP_RSMBL,
> +			"rte_pktmbuf_pool_create(%s) failed", buf);
>  		return -1;
>  	}
>  
> diff --git a/examples/multi_process/l2fwd_fork/main.c b/examples/multi_process/l2fwd_fork/main.c
> index 2d951d9..358a760 100644
> --- a/examples/multi_process/l2fwd_fork/main.c
> +++ b/examples/multi_process/l2fwd_fork/main.c
> @@ -77,8 +77,7 @@
>  
>  #define RTE_LOGTYPE_L2FWD RTE_LOGTYPE_USER1
>  #define MBUF_NAME	"mbuf_pool_%d"
> -#define MBUF_SIZE	\
> -(RTE_MBUF_DEFAULT_DATAROOM + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
> +#define MBUF_DATA_SIZE	RTE_MBUF_DEFAULT_BUF_SIZE
>  #define NB_MBUF   8192
>  #define RING_MASTER_NAME	"l2fwd_ring_m2s_"
>  #define RING_SLAVE_NAME		"l2fwd_ring_s2m_"
> @@ -989,14 +988,11 @@ main(int argc, char **argv)
>  		flags = MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET;
>  		snprintf(buf_name, RTE_MEMPOOL_NAMESIZE, MBUF_NAME, portid);
>  		l2fwd_pktmbuf_pool[portid] =
> -			rte_mempool_create(buf_name, NB_MBUF,
> -					   MBUF_SIZE, 32,
> -					   sizeof(struct rte_pktmbuf_pool_private),
> -					   rte_pktmbuf_pool_init, NULL,
> -					   rte_pktmbuf_init, NULL,
> -					   rte_socket_id(), flags);
> +			rte_pktmbuf_pool_create(buf_name, NB_MBUF, 32,
> +				0, MBUF_DATA_SIZE, rte_socket_id(),
> +				NULL);
>  		if (l2fwd_pktmbuf_pool[portid] == NULL)
> -			rte_exit(EXIT_FAILURE, "Cannot init mbuf pool\n");
> +			rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");
>  
>  		printf("Create mbuf %s\n", buf_name);
>  	}
> diff --git a/examples/tep_termination/main.c b/examples/tep_termination/main.c
> index 622f248..2b786c5 100644
> --- a/examples/tep_termination/main.c
> +++ b/examples/tep_termination/main.c
> @@ -68,7 +68,7 @@
>  				(nb_switching_cores * MBUF_CACHE_SIZE))
>  
>  #define MBUF_CACHE_SIZE 128
> -#define MBUF_SIZE (2048 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
> +#define MBUF_DATA_SIZE RTE_MBUF_DEFAULT_BUF_SIZE
>  
>  #define MAX_PKT_BURST 32	/* Max burst size for RX/TX */
>  #define BURST_TX_DRAIN_US 100	/* TX drain every ~100us */
> @@ -1200,15 +1200,14 @@ main(int argc, char *argv[])
>  			MAX_SUP_PORTS);
>  	}
>  	/* Create the mbuf pool. */
> -	mbuf_pool = rte_mempool_create(
> +	mbuf_pool = rte_pktmbuf_pool_create(
>  			"MBUF_POOL",
> -			NUM_MBUFS_PER_PORT
> -			* valid_nb_ports,
> -			MBUF_SIZE, MBUF_CACHE_SIZE,
> -			sizeof(struct rte_pktmbuf_pool_private),
> -			rte_pktmbuf_pool_init, NULL,
> -			rte_pktmbuf_init, NULL,
> -			rte_socket_id(), 0);
> +			NUM_MBUFS_PER_PORT * valid_nb_ports,
> +			MBUF_CACHE_SIZE,
> +			0,
> +			MBUF_DATA_SIZE,
> +			rte_socket_id(),
> +			NULL);
>  	if (mbuf_pool == NULL)
>  		rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");
>  
> diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
> index 3e9cbb6..4b871ca 100644
> --- a/lib/librte_mbuf/rte_mbuf.c
> +++ b/lib/librte_mbuf/rte_mbuf.c
> @@ -62,7 +62,7 @@
>  
>  /*
>   * ctrlmbuf constructor, given as a callback function to
> - * rte_mempool_create()
> + * rte_mempool_obj_iter() or rte_mempool_create()
>   */
>  void
>  rte_ctrlmbuf_init(struct rte_mempool *mp,
> @@ -77,7 +77,8 @@ rte_ctrlmbuf_init(struct rte_mempool *mp,
>  
>  /*
>   * pktmbuf pool constructor, given as a callback function to
> - * rte_mempool_create()
> + * rte_mempool_create(), or called directly if using
> + * rte_mempool_create_empty()/rte_mempool_populate()
>   */
>  void
>  rte_pktmbuf_pool_init(struct rte_mempool *mp, void *opaque_arg)
> @@ -110,7 +111,7 @@ rte_pktmbuf_pool_init(struct rte_mempool *mp, void *opaque_arg)
>  
>  /*
>   * pktmbuf constructor, given as a callback function to
> - * rte_mempool_create().
> + * rte_mempool_obj_iter() or rte_mempool_create().
>   * Set the fields of a packet mbuf to their default values.
>   */
>  void
> diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
> index 774e071..352fa02 100644
> --- a/lib/librte_mbuf/rte_mbuf.h
> +++ b/lib/librte_mbuf/rte_mbuf.h
> @@ -44,6 +44,13 @@
>   * buffers. The message buffers are stored in a mempool, using the
>   * RTE mempool library.
>   *
> + * The preferred way to create a mbuf pool is to use
> + * rte_pktmbuf_pool_create(). However, in some situations, an
> + * application may want to have more control (ex: populate the pool with
> + * specific memory), in this case it is possible to use functions from
> + * rte_mempool. See how rte_pktmbuf_pool_create() is implemented for
> + * details.
> + *
>   * This library provide an API to allocate/free packet mbufs, which are
>   * used to carry network packets.
>   *
> @@ -1189,14 +1196,14 @@ __rte_mbuf_raw_free(struct rte_mbuf *m)
>   * This function initializes some fields in an mbuf structure that are
>   * not modified by the user once created (mbuf type, origin pool, buffer
>   * start address, and so on). This function is given as a callback function
> - * to rte_mempool_create() at pool creation time.
> + * to rte_mempool_obj_iter() or rte_mempool_create() at pool creation time.
>   *
>   * @param mp
>   *   The mempool from which the mbuf is allocated.
>   * @param opaque_arg
>   *   A pointer that can be used by the user to retrieve useful information
> - *   for mbuf initialization. This pointer comes from the ``init_arg``
> - *   parameter of rte_mempool_create().
> + *   for mbuf initialization. This pointer is the opaque argument passed to
> + *   rte_mempool_obj_iter() or rte_mempool_create().
>   * @param m
>   *   The mbuf to initialize.
>   * @param i
> @@ -1270,14 +1277,14 @@ rte_is_ctrlmbuf(struct rte_mbuf *m)
>   * This function initializes some fields in the mbuf structure that are
>   * not modified by the user once created (origin pool, buffer start
>   * address, and so on). This function is given as a callback function to
> - * rte_mempool_create() at pool creation time.
> + * rte_mempool_obj_iter() or rte_mempool_create() at pool creation time.
>   *
>   * @param mp
>   *   The mempool from which mbufs originate.
>   * @param opaque_arg
>   *   A pointer that can be used by the user to retrieve useful information
> - *   for mbuf initialization. This pointer comes from the ``init_arg``
> - *   parameter of rte_mempool_create().
> + *   for mbuf initialization. This pointer is the opaque argument passed to
> + *   rte_mempool_obj_iter() or rte_mempool_create().
>   * @param m
>   *   The mbuf to initialize.
>   * @param i
> @@ -1292,7 +1299,8 @@ void rte_pktmbuf_init(struct rte_mempool *mp, void *opaque_arg,
>   *
>   * This function initializes the mempool private data in the case of a
>   * pktmbuf pool. This private data is needed by the driver. The
> - * function is given as a callback function to rte_mempool_create() at
> + * function must be called on the mempool before it is used, or it
> + * can be given as a callback function to rte_mempool_create() at
>   * pool creation. It can be extended by the user, for example, to
>   * provide another packet size.
>   *
> @@ -1300,8 +1308,8 @@ void rte_pktmbuf_init(struct rte_mempool *mp, void *opaque_arg,
>   *   The mempool from which mbufs originate.
>   * @param opaque_arg
>   *   A pointer that can be used by the user to retrieve useful information
> - *   for mbuf initialization. This pointer comes from the ``init_arg``
> - *   parameter of rte_mempool_create().
> + *   for mbuf initialization. This pointer is the opaque argument passed to
> + *   rte_mempool_create().
>   */
>  void rte_pktmbuf_pool_init(struct rte_mempool *mp, void *opaque_arg);
>  
> @@ -1309,8 +1317,7 @@ void rte_pktmbuf_pool_init(struct rte_mempool *mp, void *opaque_arg);
>   * Create a mbuf pool.
>   *
>   * This function creates and initializes a packet mbuf pool. It is
> - * a wrapper to rte_mempool_create() with the proper packet constructor
> - * and mempool constructor.
> + * a wrapper to rte_mempool functions.
>   *
>   * @param name
>   *   The name of the mbuf pool.
> -- 
> 2.8.1
> 

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [RFC 2/7] mbuf: use helper to create the pool
  2017-01-16 15:30   ` Santosh Shukla
@ 2017-01-31 10:31     ` Olivier Matz
  0 siblings, 0 replies; 15+ messages in thread
From: Olivier Matz @ 2017-01-31 10:31 UTC (permalink / raw)
  To: Santosh Shukla; +Cc: dev, jerin.jacob, hemant.agrawal, david.hunt

Hi Santosh,

On Mon, 16 Jan 2017 21:00:37 +0530, Santosh Shukla
<santosh.shukla@caviumnetworks.com> wrote:
> Hi Olivier,
> 
> 
> On Mon, Sep 19, 2016 at 03:42:42PM +0200, Olivier Matz wrote:
> > When possible, replace the uses of rte_mempool_create() with
> > the helper provided in librte_mbuf: rte_pktmbuf_pool_create().
> > 
> > This is the preferred way to create a mbuf pool.
> > 
> > By the way, also update the documentation.
> >  
> 
> I am working on an ext-mempool PMD driver for the cvm SoC,
> so I am interested in this thread.
> 
> I am wondering why this thread was not followed up. Is it
> because we don't want to deprecate rte_mempool_create()?
> Or, if we do want to, which release are you targeting?

It seems that the RFC patchset was not the proper way to fix the issue.
On the other hand, this particular patch should be integrated, as
Hemant highlighted too. Thanks for the reminder.

> Besides that, some high-level comments:
> - Your changeset is missing the mempool test applications, i.e.
> test_mempool.c/ test_mempool_perf.c; do you plan to accommodate them?

As answered in the other thread, I think there is nothing to change
in test_mempool*.c, since this patch is just about mbuf pools.


> - ext-mempool does not necessarily need MBUF_CACHE_SIZE. Let the HW
> manager hand buffers directly over to the application rather than
> caching the same buffers per core. It will save some cycles. What do you think?

It's still possible to set the cache size to 0. In that case, the pool
will directly call rte_mempool_ops_dequeue_bulk(). But, given the cost of
the function call to ops->dequeue() and the few associated checks, it is
probably faster to use the cache, even with a fast hw allocation.
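
As a hypothetical sketch, and assuming the RFC's extra ops-name argument
to rte_pktmbuf_pool_create(), a cache-less pool backed by a hw handler
could be created like this ("ext_hw_pool" is just a placeholder name, not
an existing handler):

#include <rte_mbuf.h>
#include <rte_lcore.h>

/* Sketch only: assumes the RFC's ops_name parameter is applied. */
static struct rte_mempool *
create_cacheless_mbuf_pool(void)
{
	/* cache_size == 0: each mbuf alloc/free goes straight to the
	 * handler's dequeue/enqueue callbacks, without a per-lcore cache */
	return rte_pktmbuf_pool_create("hw_mbuf_pool", 8192,
		0,                          /* cache_size */
		0,                          /* priv_size */
		RTE_MBUF_DEFAULT_BUF_SIZE,  /* data room size */
		rte_socket_id(),
		"ext_hw_pool");             /* hypothetical ops name */
}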


> - I figured out that the ext-mempool API does not map well onto the cvm
> hw, for a few reasons.
>   Let's say the application calls:
>   rte_pktmbuf_pool_create()
>    --> rte_mempool_create_empty()
>    --> rte_mempool_ops_byname()
>    --> rte_mempool_populate_default()
>          --> rte_mempool_ops_alloc()
>                   --> ext-mempool-specific pool-create handler
> [...]

I'm answering in the other thread.

Thanks,
Olivier

^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2017-01-31 10:32 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-09-19 13:42 [RFC 0/7] changing mbuf pool handler Olivier Matz
2016-09-19 13:42 ` [RFC 1/7] mbuf: set the handler at mbuf pool creation Olivier Matz
2016-09-19 13:42 ` [RFC 2/7] mbuf: use helper to create the pool Olivier Matz
2017-01-16 15:30   ` Santosh Shukla
2017-01-31 10:31     ` Olivier Matz
2016-09-19 13:42 ` [RFC 3/7] testpmd: new parameter to set mbuf pool ops Olivier Matz
2016-09-19 13:42 ` [RFC 4/7] l3fwd: rework long options parsing Olivier Matz
2016-09-19 13:42 ` [RFC 5/7] l3fwd: new parameter to set mbuf pool ops Olivier Matz
2016-09-19 13:42 ` [RFC 6/7] l2fwd: rework long options parsing Olivier Matz
2016-09-19 13:42 ` [RFC 7/7] l2fwd: new parameter to set mbuf pool ops Olivier Matz
2016-09-22 11:52 ` [RFC 0/7] changing mbuf pool handler Hemant Agrawal
2016-10-03 15:49   ` Olivier Matz
2016-10-05  9:41     ` Hunt, David
2016-10-05 11:49       ` Hemant Agrawal
2016-10-05 13:15         ` Hunt, David
