From mboxrd@z Thu Jan  1 00:00:00 1970
From: Hemant Agrawal
Subject: Re: [PATCH] mbuf: use pktmbuf helper to create the pool
Date: Thu, 19 Jan 2017 13:27:40 +0000
References: <1484679174-4174-1-git-send-email-hemant.agrawal@nxp.com>
 <1484832240-2048-1-git-send-email-hemant.agrawal@nxp.com>
 <1484832240-2048-3-git-send-email-hemant.agrawal@nxp.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Cc: "thomas.monjalon@6wind.com", "bruce.richardson@intel.com",
 Shreyansh Jain, "john.mcnamara@intel.com", "ferruh.yigit@intel.com",
 "jerin.jacob@caviumnetworks.com", "Olivier Matz"
To: Hemant Agrawal, "dev@dpdk.org"
In-Reply-To: <1484832240-2048-3-git-send-email-hemant.agrawal@nxp.com>
List-Id: DPDK patches and discussions

Please ignore. Apologies for the repeated send. This patch was posted earlier.

- Hemant

> -----Original Message-----
> From: Hemant Agrawal [mailto:hemant.agrawal@nxp.com]
> Sent: Thursday, January 19, 2017 6:53 PM
> To: dev@dpdk.org
> Cc: thomas.monjalon@6wind.com; bruce.richardson@intel.com;
> Shreyansh Jain; john.mcnamara@intel.com; ferruh.yigit@intel.com;
> jerin.jacob@caviumnetworks.com; Olivier Matz; Hemant Agrawal
> Subject: [PATCH] mbuf: use pktmbuf helper to create the pool
>
> When possible, replace the uses of rte_mempool_create() with the helper
> provided in librte_mbuf: rte_pktmbuf_pool_create().
>
> This is the preferred way to create a mbuf pool.
>
> This also updates the documentation.
>
> Signed-off-by: Olivier Matz
> Signed-off-by: Hemant Agrawal
> ---
> This patch is derived from the RFC from Olivier:
> http://dpdk.org/dev/patchwork/patch/15925/
>
>  app/test/test_link_bonding_rssconf.c              | 11 ++++----
>  doc/guides/sample_app_ug/ip_reassembly.rst        | 13 +++++----
>  doc/guides/sample_app_ug/ipv4_multicast.rst       | 12 ++++----
>  doc/guides/sample_app_ug/l2_forward_job_stats.rst | 33 ++++++++-----------
>  .../sample_app_ug/l2_forward_real_virtual.rst     | 26 +++++++---------
>  doc/guides/sample_app_ug/ptpclient.rst            | 11 ++------
>  doc/guides/sample_app_ug/quota_watermark.rst      | 26 ++++++---------
>  drivers/net/bonding/rte_eth_bond_8023ad.c         | 13 ++++-----
>  examples/ip_pipeline/init.c                       | 18 ++++++------
>  examples/ip_reassembly/main.c                     | 16 +++++------
>  examples/multi_process/l2fwd_fork/main.c          | 13 +++------
>  examples/tep_termination/main.c                   | 16 +++++------
>  lib/librte_mbuf/rte_mbuf.c                        |  7 +++--
>  lib/librte_mbuf/rte_mbuf.h                        | 29 +++++++++++--------
>  14 files changed, 106 insertions(+), 138 deletions(-)
>
> diff --git a/app/test/test_link_bonding_rssconf.c
> b/app/test/test_link_bonding_rssconf.c
> index 34f1c16..9034f62 100644
> --- a/app/test/test_link_bonding_rssconf.c
> +++ b/app/test/test_link_bonding_rssconf.c
> @@ -67,7 +67,7 @@
>  #define SLAVE_RXTX_QUEUE_FMT ("rssconf_slave%d_q%d")
>
>  #define NUM_MBUFS 8191
> -#define MBUF_SIZE (1600 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
> +#define MBUF_SIZE (1600 + RTE_PKTMBUF_HEADROOM)
>  #define MBUF_CACHE_SIZE 250
>  #define BURST_SIZE 32
>
> @@ -536,13 +536,12 @@ struct link_bonding_rssconf_unittest_params {
>
>  	if (test_params.mbuf_pool == NULL) {
>
> -		test_params.mbuf_pool = rte_mempool_create("RSS_MBUF_POOL", NUM_MBUFS *
> -			SLAVE_COUNT, MBUF_SIZE, MBUF_CACHE_SIZE,
> -			sizeof(struct rte_pktmbuf_pool_private), rte_pktmbuf_pool_init,
> -			NULL, rte_pktmbuf_init, NULL, rte_socket_id(), 0);
> +		test_params.mbuf_pool = rte_pktmbuf_pool_create(
"RSS_MBUF_POOL", NUM_MBUFS * SLAVE_COUNT, > + MBUF_CACHE_SIZE, 0, MBUF_SIZE, rte_socket_id()); >=20 > TEST_ASSERT(test_params.mbuf_pool !=3D NULL, > - "rte_mempool_create failed\n"); > + "rte_pktmbuf_pool_create failed\n"); > } >=20 > /* Create / initialize ring eth devs. */ diff --git > a/doc/guides/sample_app_ug/ip_reassembly.rst > b/doc/guides/sample_app_ug/ip_reassembly.rst > index 3c5cc70..d5097c6 100644 > --- a/doc/guides/sample_app_ug/ip_reassembly.rst > +++ b/doc/guides/sample_app_ug/ip_reassembly.rst > @@ -223,11 +223,14 @@ each RX queue uses its own mempool. >=20 > snprintf(buf, sizeof(buf), "mbuf_pool_%u_%u", lcore, queue); >=20 > - if ((rxq->pool =3D rte_mempool_create(buf, nb_mbuf, MBUF_SIZE, 0, > sizeof(struct rte_pktmbuf_pool_private), rte_pktmbuf_pool_init, NULL, > - rte_pktmbuf_init, NULL, socket, MEMPOOL_F_SP_PUT | > MEMPOOL_F_SC_GET)) =3D=3D NULL) { > - > - RTE_LOG(ERR, IP_RSMBL, "mempool_create(%s) failed", buf); > - return -1; > + rxq->pool =3D rte_pktmbuf_pool_create(buf, nb_mbuf, > + 0, /* cache size */ > + 0, /* priv size */ > + MBUF_DATA_SIZE, socket); > + if (rxq->pool =3D=3D NULL) { > + RTE_LOG(ERR, IP_RSMBL, > + "rte_pktmbuf_pool_create(%s) failed", buf); > + return -1; > } >=20 > Packet Reassembly and Forwarding > diff --git a/doc/guides/sample_app_ug/ipv4_multicast.rst > b/doc/guides/sample_app_ug/ipv4_multicast.rst > index 72da8c4..d9ff249 100644 > --- a/doc/guides/sample_app_ug/ipv4_multicast.rst > +++ b/doc/guides/sample_app_ug/ipv4_multicast.rst > @@ -145,12 +145,12 @@ Memory pools for indirect buffers are initialized > differently from the memory po >=20 > .. 
>
> -    packet_pool = rte_mempool_create("packet_pool", NB_PKT_MBUF,
> -        PKT_MBUF_SIZE, 32, sizeof(struct rte_pktmbuf_pool_private),
> -        rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL,
> -        rte_socket_id(), 0);
> -
> -    header_pool = rte_mempool_create("header_pool", NB_HDR_MBUF,
> -        HDR_MBUF_SIZE, 32, 0, NULL, NULL, rte_pktmbuf_init, NULL,
> -        rte_socket_id(), 0);
> -    clone_pool = rte_mempool_create("clone_pool", NB_CLONE_MBUF,
> -        CLONE_MBUF_SIZE, 32, 0, NULL, NULL, rte_pktmbuf_init, NULL,
> -        rte_socket_id(), 0);
> +    packet_pool = rte_pktmbuf_pool_create("packet_pool", NB_PKT_MBUF, 32,
> +        0, PKT_MBUF_DATA_SIZE, rte_socket_id());
> +    header_pool = rte_pktmbuf_pool_create("header_pool", NB_HDR_MBUF, 32,
> +        0, HDR_MBUF_DATA_SIZE, rte_socket_id());
> +    clone_pool = rte_pktmbuf_pool_create("clone_pool", NB_CLONE_MBUF, 32,
> +        0, 0, rte_socket_id());
>
>  The reason for this is because indirect buffers are not supposed to hold any packet data and therefore can be initialized with lower amount of reserved memory for each buffer.
> diff --git a/doc/guides/sample_app_ug/l2_forward_job_stats.rst
> b/doc/guides/sample_app_ug/l2_forward_job_stats.rst
> index 2444e36..a606b86 100644
> --- a/doc/guides/sample_app_ug/l2_forward_job_stats.rst
> +++ b/doc/guides/sample_app_ug/l2_forward_job_stats.rst
> @@ -193,36 +193,25 @@ and the application to store network packet data:
>  .. code-block:: c
>
>      /* create the mbuf pool */
> -    l2fwd_pktmbuf_pool =
> -        rte_mempool_create("mbuf_pool", NB_MBUF,
> -            MBUF_SIZE, 32,
> -            sizeof(struct rte_pktmbuf_pool_private),
> -            rte_pktmbuf_pool_init, NULL,
> -            rte_pktmbuf_init, NULL,
> -            rte_socket_id(), 0);
> +    l2fwd_pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF,
> +        MEMPOOL_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
> +        rte_socket_id());
>
>      if (l2fwd_pktmbuf_pool == NULL)
>          rte_exit(EXIT_FAILURE, "Cannot init mbuf pool\n");
>
>  The rte_mempool is a generic structure used to handle pools of objects.
> -In this case, it is necessary to create a pool that will be used by the driver,
> -which expects to have some reserved space in the mempool structure,
> -sizeof(struct rte_pktmbuf_pool_private) bytes.
> -The number of allocated pkt mbufs is NB_MBUF, with a size of MBUF_SIZE each.
> -A per-lcore cache of 32 mbufs is kept.
> +In this case, it is necessary to create a pool that will be used by the driver.
> +The number of allocated pkt mbufs is NB_MBUF, with a data room size of
> +RTE_MBUF_DEFAULT_BUF_SIZE each.
> +A per-lcore cache of MEMPOOL_CACHE_SIZE mbufs is kept.
>  The memory is allocated in rte_socket_id() socket, but it is possible to extend this code to allocate one mbuf pool per socket.
>
> -Two callback pointers are also given to the rte_mempool_create() function:
> -
> -* The first callback pointer is to rte_pktmbuf_pool_init() and is used
> -  to initialize the private data of the mempool, which is needed by the driver.
> -  This function is provided by the mbuf API, but can be copied and extended by the developer.
> -
> -* The second callback pointer given to rte_mempool_create() is the mbuf initializer.
> -  The default is used, that is, rte_pktmbuf_init(), which is provided in the rte_mbuf library.
> -  If a more complex application wants to extend the rte_pktmbuf structure for its own needs,
> -  a new function derived from rte_pktmbuf_init( ) can be created.
> +The rte_pktmbuf_pool_create() function uses the default mbuf pool and
> +mbuf initializers, respectively rte_pktmbuf_pool_init() and rte_pktmbuf_init().
> +An advanced application may want to use the mempool API to create the
> +mbuf pool with more control.
>
>  Driver Initialization
>  ~~~~~~~~~~~~~~~~~~~~~
> diff --git a/doc/guides/sample_app_ug/l2_forward_real_virtual.rst
> b/doc/guides/sample_app_ug/l2_forward_real_virtual.rst
> index cf15d1c..de86ac8 100644
> --- a/doc/guides/sample_app_ug/l2_forward_real_virtual.rst
> +++ b/doc/guides/sample_app_ug/l2_forward_real_virtual.rst
> @@ -207,31 +207,25 @@ and the application to store network packet data:
>
>      /* create the mbuf pool */
>
> -    l2fwd_pktmbuf_pool = rte_mempool_create("mbuf_pool", NB_MBUF,
> -        MBUF_SIZE, 32, sizeof(struct rte_pktmbuf_pool_private),
> -        rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL, SOCKET0, 0);
> +    l2fwd_pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF,
> +        MEMPOOL_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
> +        rte_socket_id());
>
>      if (l2fwd_pktmbuf_pool == NULL)
>          rte_panic("Cannot init mbuf pool\n");
>
>  The rte_mempool is a generic structure used to handle pools of objects.
> -In this case, it is necessary to create a pool that will be used by the driver,
> -which expects to have some reserved space in the mempool structure,
> -sizeof(struct rte_pktmbuf_pool_private) bytes.
> -The number of allocated pkt mbufs is NB_MBUF, with a size of MBUF_SIZE each.
> +In this case, it is necessary to create a pool that will be used by the driver.
> +The number of allocated pkt mbufs is NB_MBUF, with a data room size of
> +RTE_MBUF_DEFAULT_BUF_SIZE each.
>  A per-lcore cache of 32 mbufs is kept.
>  The memory is allocated in NUMA socket 0, but it is possible to extend this code to allocate one mbuf pool per socket.
>
> -Two callback pointers are also given to the rte_mempool_create() function:
> -
> -* The first callback pointer is to rte_pktmbuf_pool_init() and is used
> -  to initialize the private data of the mempool, which is needed by the driver.
> -  This function is provided by the mbuf API, but can be copied and extended by the developer.
> -
> -* The second callback pointer given to rte_mempool_create() is the mbuf initializer.
> -  The default is used, that is, rte_pktmbuf_init(), which is provided in the rte_mbuf library.
> -  If a more complex application wants to extend the rte_pktmbuf structure for its own needs,
> -  a new function derived from rte_pktmbuf_init( ) can be created.
> +The rte_pktmbuf_pool_create() function uses the default mbuf pool and
> +mbuf initializers, respectively rte_pktmbuf_pool_init() and rte_pktmbuf_init().
> +An advanced application may want to use the mempool API to create the
> +mbuf pool with more control.
>
>  .. _l2_fwd_app_dvr_init:
>
> diff --git a/doc/guides/sample_app_ug/ptpclient.rst
> b/doc/guides/sample_app_ug/ptpclient.rst
> index 6e425b7..405a267 100644
> --- a/doc/guides/sample_app_ug/ptpclient.rst
> +++ b/doc/guides/sample_app_ug/ptpclient.rst
> @@ -171,15 +171,8 @@ used by the application:
>
>  .. code-block:: c
>
> -    mbuf_pool = rte_mempool_create("MBUF_POOL",
> -        NUM_MBUFS * nb_ports,
> -        MBUF_SIZE,
> -        MBUF_CACHE_SIZE,
> -        sizeof(struct rte_pktmbuf_pool_private),
> -        rte_pktmbuf_pool_init, NULL,
> -        rte_pktmbuf_init, NULL,
> -        rte_socket_id(),
> -        0);
> +    mbuf_pool = rte_pktmbuf_pool_create("MBUF_POOL", NUM_MBUFS * nb_ports,
> +        MBUF_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
>
>  Mbufs are the packet buffer structure used by DPDK. They are explained in detail in the "Mbuf Library" section of the *DPDK Programmer's Guide*.
> diff --git a/doc/guides/sample_app_ug/quota_watermark.rst
> b/doc/guides/sample_app_ug/quota_watermark.rst
> index c56683a..a0da8fe 100644
> --- a/doc/guides/sample_app_ug/quota_watermark.rst
> +++ b/doc/guides/sample_app_ug/quota_watermark.rst
> @@ -254,32 +254,24 @@ It contains a set of mbuf objects that are used by the driver and the applicatio
>
>  .. code-block:: c
>
>      /* Create a pool of mbuf to store packets */
> -
> -    mbuf_pool = rte_mempool_create("mbuf_pool", MBUF_PER_POOL, MBUF_SIZE,
> -        32, sizeof(struct rte_pktmbuf_pool_private),
> -        rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL,
> -        rte_socket_id(), 0);
> +    mbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", MBUF_PER_POOL, 32, 0,
> +        MBUF_DATA_SIZE, rte_socket_id());
>
>      if (mbuf_pool == NULL)
>          rte_panic("%s\n", rte_strerror(rte_errno));
>
>  The rte_mempool is a generic structure used to handle pools of objects.
> -In this case, it is necessary to create a pool that will be used by the driver,
> -which expects to have some reserved space in the mempool structure, sizeof(struct rte_pktmbuf_pool_private) bytes.
> +In this case, it is necessary to create a pool that will be used by the driver.
>
> -The number of allocated pkt mbufs is MBUF_PER_POOL, with a size of MBUF_SIZE each.
> +The number of allocated pkt mbufs is MBUF_PER_POOL, with a data room
> +size of MBUF_DATA_SIZE each.
>  A per-lcore cache of 32 mbufs is kept.
>  The memory is allocated in on the master lcore's socket, but it is possible to extend this code to allocate one mbuf pool per socket.
>
> -Two callback pointers are also given to the rte_mempool_create() function:
> -
> -* The first callback pointer is to rte_pktmbuf_pool_init() and is used to initialize the private data of the mempool,
> -  which is needed by the driver.
> -  This function is provided by the mbuf API, but can be copied and extended by the developer.
> -
> -* The second callback pointer given to rte_mempool_create() is the mbuf initializer.
> -
> -The default is used, that is, rte_pktmbuf_init(), which is provided in the rte_mbuf library.
> -If a more complex application wants to extend the rte_pktmbuf structure for its own needs,
> -a new function derived from rte_pktmbuf_init() can be created.
> +The rte_pktmbuf_pool_create() function uses the default mbuf pool and
> +mbuf initializers, respectively rte_pktmbuf_pool_init() and rte_pktmbuf_init().
> +An advanced application may want to use the mempool API to create the
> +mbuf pool with more control.
>
>  Ports Configuration and Pairing
>  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c
> b/drivers/net/bonding/rte_eth_bond_8023ad.c
> index 2f7ae70..af211ca 100644
> --- a/drivers/net/bonding/rte_eth_bond_8023ad.c
> +++ b/drivers/net/bonding/rte_eth_bond_8023ad.c
> @@ -888,8 +888,8 @@
>  	RTE_ASSERT(port->tx_ring == NULL);
>  	socket_id = rte_eth_devices[slave_id].data->numa_node;
>
> -	element_size = sizeof(struct slow_protocol_frame) + sizeof(struct rte_mbuf)
> -			+ RTE_PKTMBUF_HEADROOM;
> +	element_size = sizeof(struct slow_protocol_frame) +
> +			RTE_PKTMBUF_HEADROOM;
>
>  	/* The size of the mempool should be at least:
>  	 * the sum of the TX descriptors + BOND_MODE_8023AX_SLAVE_TX_PKTS */
> @@ -900,11 +900,10 @@
>  	}
>
>  	snprintf(mem_name, RTE_DIM(mem_name), "slave_port%u_pool", slave_id);
> -	port->mbuf_pool = rte_mempool_create(mem_name,
> -		total_tx_desc, element_size,
> -		RTE_MEMPOOL_CACHE_MAX_SIZE >= 32 ? 32 : RTE_MEMPOOL_CACHE_MAX_SIZE,
> -		sizeof(struct rte_pktmbuf_pool_private), rte_pktmbuf_pool_init,
> -		NULL, rte_pktmbuf_init, NULL, socket_id, MEMPOOL_F_NO_SPREAD);
> +	port->mbuf_pool = rte_pktmbuf_pool_create(mem_name, total_tx_desc,
> +		RTE_MEMPOOL_CACHE_MAX_SIZE >= 32 ?
> +			32 : RTE_MEMPOOL_CACHE_MAX_SIZE,
> +		0, element_size, socket_id);
>
>  	/* Any memory allocation failure in initalization is critical because
>  	 * resources can't be free, so reinitialization is impossible. */
> diff --git a/examples/ip_pipeline/init.c b/examples/ip_pipeline/init.c
> index 3b36b53..d55c3b4 100644
> --- a/examples/ip_pipeline/init.c
> +++ b/examples/ip_pipeline/init.c
> @@ -324,16 +324,14 @@
>  		struct app_mempool_params *p = &app->mempool_params[i];
>
>  		APP_LOG(app, HIGH, "Initializing %s ...", p->name);
> -		app->mempool[i] = rte_mempool_create(
> -			p->name,
> -			p->pool_size,
> -			p->buffer_size,
> -			p->cache_size,
> -			sizeof(struct rte_pktmbuf_pool_private),
> -			rte_pktmbuf_pool_init, NULL,
> -			rte_pktmbuf_init, NULL,
> -			p->cpu_socket_id,
> -			0);
> +		app->mempool[i] = rte_pktmbuf_pool_create(
> +			p->name,
> +			p->pool_size,
> +			p->cache_size,
> +			0, /* priv_size */
> +			p->buffer_size -
> +				sizeof(struct rte_mbuf), /* mbuf data size */
> +			p->cpu_socket_id);
>
>  		if (app->mempool[i] == NULL)
>  			rte_panic("%s init error\n", p->name);
> diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
> index 50fe422..f6378bf 100644
> --- a/examples/ip_reassembly/main.c
> +++ b/examples/ip_reassembly/main.c
> @@ -84,9 +84,7 @@
>
>  #define MAX_JUMBO_PKT_LEN  9600
>
> -#define BUF_SIZE	RTE_MBUF_DEFAULT_DATAROOM
> -#define MBUF_SIZE	\
> -	(BUF_SIZE + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
> +#define MBUF_DATA_SIZE	RTE_MBUF_DEFAULT_BUF_SIZE
>
>  #define NB_MBUF 8192
>
> @@ -909,11 +907,13 @@ struct rte_lpm6_config lpm6_config = {
>
>  	snprintf(buf, sizeof(buf), "mbuf_pool_%u_%u", lcore, queue);
>
> -	if ((rxq->pool = rte_mempool_create(buf, nb_mbuf, MBUF_SIZE, 0,
> -			sizeof(struct rte_pktmbuf_pool_private),
> -			rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL,
> -			socket, MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET)) == NULL) {
> -		RTE_LOG(ERR, IP_RSMBL, "mempool_create(%s) failed", buf);
> +	rxq->pool = rte_pktmbuf_pool_create(buf, nb_mbuf,
> +		0, /* cache size */
> +		0, /* priv size */
> +		MBUF_DATA_SIZE, socket);
> +	if (rxq->pool == NULL) {
> +		RTE_LOG(ERR, IP_RSMBL,
> +			"rte_pktmbuf_pool_create(%s) failed", buf);
>  		return -1;
>  	}
>
> diff --git a/examples/multi_process/l2fwd_fork/main.c
> b/examples/multi_process/l2fwd_fork/main.c
> index 2d951d9..b34916e 100644
> --- a/examples/multi_process/l2fwd_fork/main.c
> +++ b/examples/multi_process/l2fwd_fork/main.c
> @@ -77,8 +77,7 @@
>
>  #define RTE_LOGTYPE_L2FWD RTE_LOGTYPE_USER1
>  #define MBUF_NAME	"mbuf_pool_%d"
> -#define MBUF_SIZE	\
> -	(RTE_MBUF_DEFAULT_DATAROOM + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
> +#define MBUF_DATA_SIZE	RTE_MBUF_DEFAULT_BUF_SIZE
>  #define NB_MBUF   8192
>  #define RING_MASTER_NAME	"l2fwd_ring_m2s_"
>  #define RING_SLAVE_NAME	"l2fwd_ring_s2m_"
> @@ -989,14 +988,10 @@ struct l2fwd_port_statistics {
>  		flags = MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET;
>  		snprintf(buf_name, RTE_MEMPOOL_NAMESIZE, MBUF_NAME, portid);
>  		l2fwd_pktmbuf_pool[portid] =
> -			rte_mempool_create(buf_name, NB_MBUF,
> -					   MBUF_SIZE, 32,
> -					   sizeof(struct rte_pktmbuf_pool_private),
> -					   rte_pktmbuf_pool_init, NULL,
> -					   rte_pktmbuf_init, NULL,
> -					   rte_socket_id(), flags);
> +			rte_pktmbuf_pool_create(buf_name, NB_MBUF, 32,
> +				0, MBUF_DATA_SIZE, rte_socket_id());
>  		if (l2fwd_pktmbuf_pool[portid] == NULL)
> -			rte_exit(EXIT_FAILURE, "Cannot init mbuf pool\n");
> +			rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");
>
>  		printf("Create mbuf %s\n", buf_name);
>  	}
> diff --git a/examples/tep_termination/main.c
> b/examples/tep_termination/main.c
> index bd1dc96..20dafdb 100644
> --- a/examples/tep_termination/main.c
> +++ b/examples/tep_termination/main.c
> @@ -68,7 +68,7 @@
>  	(nb_switching_cores * MBUF_CACHE_SIZE))
>
>  #define MBUF_CACHE_SIZE 128
> -#define MBUF_SIZE (2048 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
> +#define MBUF_DATA_SIZE RTE_MBUF_DEFAULT_BUF_SIZE
>
>  #define MAX_PKT_BURST 32	/* Max burst size for RX/TX */
>  #define BURST_TX_DRAIN_US 100	/* TX drain every ~100us */
> @@ -1199,15 +1199,13 @@ static inline void __attribute__((always_inline))
>  			MAX_SUP_PORTS);
>  	}
>  	/* Create the mbuf pool. */
> -	mbuf_pool = rte_mempool_create(
> +	mbuf_pool = rte_pktmbuf_pool_create(
>  		"MBUF_POOL",
> -		NUM_MBUFS_PER_PORT
> -		* valid_nb_ports,
> -		MBUF_SIZE, MBUF_CACHE_SIZE,
> -		sizeof(struct rte_pktmbuf_pool_private),
> -		rte_pktmbuf_pool_init, NULL,
> -		rte_pktmbuf_init, NULL,
> -		rte_socket_id(), 0);
> +		NUM_MBUFS_PER_PORT * valid_nb_ports,
> +		MBUF_CACHE_SIZE,
> +		0,
> +		MBUF_DATA_SIZE,
> +		rte_socket_id());
>  	if (mbuf_pool == NULL)
>  		rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");
>
> diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
> index 72ad91e..3fb2700 100644
> --- a/lib/librte_mbuf/rte_mbuf.c
> +++ b/lib/librte_mbuf/rte_mbuf.c
> @@ -62,7 +62,7 @@
>
>  /*
>   * ctrlmbuf constructor, given as a callback function to
> - * rte_mempool_create()
> + * rte_mempool_obj_iter() or rte_mempool_create()
>   */
>  void
>  rte_ctrlmbuf_init(struct rte_mempool *mp,
> @@ -77,7 +77,8 @@
>
>  /*
>   * pktmbuf pool constructor, given as a callback function to
> - * rte_mempool_create()
> + * rte_mempool_create(), or called directly if using
> + * rte_mempool_create_empty()/rte_mempool_populate()
>   */
>  void
>  rte_pktmbuf_pool_init(struct rte_mempool *mp, void *opaque_arg)
> @@ -110,7 +111,7 @@
>
>  /*
>   * pktmbuf constructor, given as a callback function to
> - * rte_mempool_create().
> + * rte_mempool_obj_iter() or rte_mempool_create().
>   * Set the fields of a packet mbuf to their default values.
>   */
>  void
> diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
> index bfce9f4..b1d4ccb 100644
> --- a/lib/librte_mbuf/rte_mbuf.h
> +++ b/lib/librte_mbuf/rte_mbuf.h
> @@ -44,6 +44,13 @@
>   * buffers. The message buffers are stored in a mempool, using the
>   * RTE mempool library.
>   *
> + * The preferred way to create a mbuf pool is to use
> + * rte_pktmbuf_pool_create(). However, in some situations, an
> + * application may want to have more control (ex: populate the pool with
> + * specific memory), in this case it is possible to use functions from
> + * rte_mempool. See how rte_pktmbuf_pool_create() is implemented for
> + * details.
> + *
>   * This library provides an API to allocate/free packet mbufs, which are
>   * used to carry network packets.
>   *
> @@ -810,14 +817,14 @@ static inline void __attribute__((always_inline))
>   * This function initializes some fields in an mbuf structure that are
>   * not modified by the user once created (mbuf type, origin pool, buffer
>   * start address, and so on). This function is given as a callback function
> - * to rte_mempool_create() at pool creation time.
> + * to rte_mempool_obj_iter() or rte_mempool_create() at pool creation time.
>   *
>   * @param mp
>   *   The mempool from which the mbuf is allocated.
>   * @param opaque_arg
>   *   A pointer that can be used by the user to retrieve useful information
> - *   for mbuf initialization. This pointer comes from the ``init_arg``
> - *   parameter of rte_mempool_create().
> + *   for mbuf initialization. This pointer is the opaque argument passed to
> + *   rte_mempool_obj_iter() or rte_mempool_create().
>   * @param m
>   *   The mbuf to initialize.
>   * @param i
> @@ -891,14 +898,14 @@ void rte_ctrlmbuf_init(struct rte_mempool *mp, void *opaque_arg,
>   * This function initializes some fields in the mbuf structure that are
>   * not modified by the user once created (origin pool, buffer start
>   * address, and so on). This function is given as a callback function to
> - * rte_mempool_create() at pool creation time.
> + * rte_mempool_obj_iter() or rte_mempool_create() at pool creation time.
>   *
>   * @param mp
>   *   The mempool from which mbufs originate.
>   * @param opaque_arg
>   *   A pointer that can be used by the user to retrieve useful information
> - *   for mbuf initialization. This pointer comes from the ``init_arg``
> - *   parameter of rte_mempool_create().
> + *   for mbuf initialization. This pointer is the opaque argument passed to
> + *   rte_mempool_obj_iter() or rte_mempool_create().
>   * @param m
>   *   The mbuf to initialize.
>   * @param i
> @@ -913,7 +920,8 @@ void rte_pktmbuf_init(struct rte_mempool *mp, void *opaque_arg,
>   *
>   * This function initializes the mempool private data in the case of a
>   * pktmbuf pool. This private data is needed by the driver. The
> - * function is given as a callback function to rte_mempool_create() at
> + * function must be called on the mempool before it is used, or it
> + * can be given as a callback function to rte_mempool_create() at
>   * pool creation. It can be extended by the user, for example, to
>   * provide another packet size.
>   *
> @@ -921,8 +929,8 @@ void rte_pktmbuf_init(struct rte_mempool *mp, void *opaque_arg,
>   *   The mempool from which mbufs originate.
>   * @param opaque_arg
>   *   A pointer that can be used by the user to retrieve useful information
> - *   for mbuf initialization. This pointer comes from the ``init_arg``
> - *   parameter of rte_mempool_create().
> + *   for mbuf initialization. This pointer is the opaque argument passed to
> + *   rte_mempool_create().
>   */
>  void rte_pktmbuf_pool_init(struct rte_mempool *mp, void *opaque_arg);
>
> @@ -930,8 +938,7 @@ void rte_pktmbuf_init(struct rte_mempool *mp, void *opaque_arg,
>   * Create a mbuf pool.
>   *
>   * This function creates and initializes a packet mbuf pool. It is
> - * a wrapper to rte_mempool_create() with the proper packet constructor
> - * and mempool constructor.
> + * a wrapper to rte_mempool functions.
>   *
>   * @param name
>   *   The name of the mbuf pool.
> --
> 1.9.1