* [PATCH v1 1/1] app/testpmd: add valid check to verify multi mempool feature
@ 2022-11-17 11:30 Hanumanth Pothula
  2022-11-17 12:55 ` [PATCH v2 " Hanumanth Pothula
  0 siblings, 1 reply; 27+ messages in thread
From: Hanumanth Pothula @ 2022-11-17 11:30 UTC (permalink / raw)
  To: Aman Singh, Yuying Zhang
  Cc: dev, andrew.rybchenko, thomas, yux.jiang, jerinj, ndabilpuram,
	gakhil, hpothula

Validate ethdev parameter 'max_rx_mempools' to know wheater device
supports multi-mempool feature or not.

Bugzilla ID: 1128

Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>
---
 app/test-pmd/testpmd.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 78ea19fcbb..79c0951b62 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -2648,16 +2648,22 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 {
 	union rte_eth_rxseg rx_useg[MAX_SEGS_BUFFER_SPLIT] = {};
 	struct rte_mempool *rx_mempool[MAX_MEMPOOL] = {};
+	struct rte_eth_dev_info dev_info;
 	struct rte_mempool *mpx;
 	unsigned int i, mp_n;
 	int ret;
 
+	ret = rte_eth_dev_info_get(port_id, &dev_info);
+	if (ret != 0)
+		return ret;
+
 	/* Verify Rx queue configuration is single pool and segment or
 	 * multiple pool/segment.
+	 * @see rte_eth_dev_info::max_rx_mempools
 	 * @see rte_eth_rxconf::rx_mempools
 	 * @see rte_eth_rxconf::rx_seg
 	 */
-	if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
+	if (!(dev_info.max_rx_mempools != 0) && !(rx_pkt_nb_segs > 1 ||
 	    ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0))) {
 		/* Single pool/segment configuration */
 		rx_conf->rx_seg = NULL;
-- 
2.25.1



* [PATCH v2 1/1] app/testpmd: add valid check to verify multi mempool feature
  2022-11-17 11:30 [PATCH v1 1/1] app/testpmd: add valid check to verify multi mempool feature Hanumanth Pothula
@ 2022-11-17 12:55 ` Hanumanth Pothula
  2022-11-17 15:00   ` Singh, Aman Deep
                     ` (2 more replies)
  0 siblings, 3 replies; 27+ messages in thread
From: Hanumanth Pothula @ 2022-11-17 12:55 UTC (permalink / raw)
  To: Aman Singh, Yuying Zhang
  Cc: dev, andrew.rybchenko, thomas, -yux.jiang, jerinj, ndabilpuram, hpothula

Validate ethdev parameter 'max_rx_mempools' to know wheater device
supports multi-mempool feature or not.

Bugzilla ID: 1128

Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>
v2:
 - Rebased on tip of next-net/main
---
 app/test-pmd/testpmd.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 4e25f77c6a..fd634bd5e6 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -2655,16 +2655,22 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 	union rte_eth_rxseg rx_useg[MAX_SEGS_BUFFER_SPLIT] = {};
 	struct rte_mempool *rx_mempool[MAX_MEMPOOL] = {};
 	struct rte_mempool *mpx;
+	struct rte_eth_dev_info dev_info;
 	unsigned int i, mp_n;
 	uint32_t prev_hdrs = 0;
 	int ret;
 
+	ret = rte_eth_dev_info_get(port_id, &dev_info);
+	if (ret != 0)
+		return ret;
+
 	/* Verify Rx queue configuration is single pool and segment or
 	 * multiple pool/segment.
+	 * @see rte_eth_dev_info::max_rx_mempools
 	 * @see rte_eth_rxconf::rx_mempools
 	 * @see rte_eth_rxconf::rx_seg
 	 */
-	if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
+	if (!(dev_info.max_rx_mempools != 0) && !(rx_pkt_nb_segs > 1 ||
 	    ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0))) {
 		/* Single pool/segment configuration */
 		rx_conf->rx_seg = NULL;
-- 
2.25.1



* Re: [PATCH v2 1/1] app/testpmd: add valid check to verify multi mempool feature
  2022-11-17 12:55 ` [PATCH v2 " Hanumanth Pothula
@ 2022-11-17 15:00   ` Singh, Aman Deep
  2022-11-17 15:58     ` [EXT] " Hanumanth Reddy Pothula
  2022-11-17 16:03   ` [PATCH v3 " Hanumanth Pothula
  2022-11-18 11:13   ` Hanumanth Pothula
  2 siblings, 1 reply; 27+ messages in thread
From: Singh, Aman Deep @ 2022-11-17 15:00 UTC (permalink / raw)
  To: Hanumanth Pothula, Yuying Zhang
  Cc: dev, andrew.rybchenko, thomas, -yux.jiang, jerinj, ndabilpuram



On 11/17/2022 6:25 PM, Hanumanth Pothula wrote:
> Validate ethdev parameter 'max_rx_mempools' to know wheater device
> supports multi-mempool feature or not.

Spell check: whether

> Bugzilla ID: 1128
>
> Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>

Tested-by: Aman Singh <aman.deep.singh@intel.com>

> v2:
>   - Rebased on tip of next-net/main
> ---
>   app/test-pmd/testpmd.c | 8 +++++++-
>   1 file changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> index 4e25f77c6a..fd634bd5e6 100644
> --- a/app/test-pmd/testpmd.c
> +++ b/app/test-pmd/testpmd.c
> @@ -2655,16 +2655,22 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
>   	union rte_eth_rxseg rx_useg[MAX_SEGS_BUFFER_SPLIT] = {};
>   	struct rte_mempool *rx_mempool[MAX_MEMPOOL] = {};
>   	struct rte_mempool *mpx;
> +	struct rte_eth_dev_info dev_info;
>   	unsigned int i, mp_n;
>   	uint32_t prev_hdrs = 0;
>   	int ret;
>   
> +	ret = rte_eth_dev_info_get(port_id, &dev_info);
> +	if (ret != 0)
> +		return ret;
> +
>   	/* Verify Rx queue configuration is single pool and segment or
>   	 * multiple pool/segment.
> +	 * @see rte_eth_dev_info::max_rx_mempools
>   	 * @see rte_eth_rxconf::rx_mempools
>   	 * @see rte_eth_rxconf::rx_seg
>   	 */
> -	if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
> +	if (!(dev_info.max_rx_mempools != 0) && !(rx_pkt_nb_segs > 1 ||

Can we make the check simpler "(dev_info.max_rx_mempools == 0)"

>   	    ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0))) {
>   		/* Single pool/segment configuration */
>   		rx_conf->rx_seg = NULL;



* RE: [EXT] Re: [PATCH v2 1/1] app/testpmd: add valid check to verify multi mempool feature
  2022-11-17 15:00   ` Singh, Aman Deep
@ 2022-11-17 15:58     ` Hanumanth Reddy Pothula
  0 siblings, 0 replies; 27+ messages in thread
From: Hanumanth Reddy Pothula @ 2022-11-17 15:58 UTC (permalink / raw)
  To: Singh, Aman Deep, Yuying Zhang
  Cc: dev, andrew.rybchenko, thomas, -yux.jiang,
	Jerin Jacob Kollanukkaran, Nithin Kumar Dabilpuram



> -----Original Message-----
> From: Singh, Aman Deep <aman.deep.singh@intel.com>
> Sent: Thursday, November 17, 2022 8:30 PM
> To: Hanumanth Reddy Pothula <hpothula@marvell.com>; Yuying Zhang
> <yuying.zhang@intel.com>
> Cc: dev@dpdk.org; andrew.rybchenko@oktetlabs.ru;
> thomas@monjalon.net; -yux.jiang@intel.com; Jerin Jacob Kollanukkaran
> <jerinj@marvell.com>; Nithin Kumar Dabilpuram
> <ndabilpuram@marvell.com>
> Subject: [EXT] Re: [PATCH v2 1/1] app/testpmd: add valid check to verify
> multi mempool feature
> 
> External Email
> 
> ----------------------------------------------------------------------
> 
> 
> On 11/17/2022 6:25 PM, Hanumanth Pothula wrote:
> > Validate ethdev parameter 'max_rx_mempools' to know wheater device
> > supports multi-mempool feature or not.
> 
> Spell check: whether
> 
> > Bugzilla ID: 1128
> >
> > Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>
> 
> Tested-by: Aman Singh <aman.deep.singh@intel.com>
> 
> > v2:
> >   - Rebased on tip of next-net/main
> > ---
> >   app/test-pmd/testpmd.c | 8 +++++++-
> >   1 file changed, 7 insertions(+), 1 deletion(-)
> >
> > diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index
> > 4e25f77c6a..fd634bd5e6 100644
> > --- a/app/test-pmd/testpmd.c
> > +++ b/app/test-pmd/testpmd.c
> > @@ -2655,16 +2655,22 @@ rx_queue_setup(uint16_t port_id, uint16_t
> rx_queue_id,
> >   	union rte_eth_rxseg rx_useg[MAX_SEGS_BUFFER_SPLIT] = {};
> >   	struct rte_mempool *rx_mempool[MAX_MEMPOOL] = {};
> >   	struct rte_mempool *mpx;
> > +	struct rte_eth_dev_info dev_info;
> >   	unsigned int i, mp_n;
> >   	uint32_t prev_hdrs = 0;
> >   	int ret;
> >
> > +	ret = rte_eth_dev_info_get(port_id, &dev_info);
> > +	if (ret != 0)
> > +		return ret;
> > +
> >   	/* Verify Rx queue configuration is single pool and segment or
> >   	 * multiple pool/segment.
> > +	 * @see rte_eth_dev_info::max_rx_mempools
> >   	 * @see rte_eth_rxconf::rx_mempools
> >   	 * @see rte_eth_rxconf::rx_seg
> >   	 */
> > -	if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
> > +	if (!(dev_info.max_rx_mempools != 0) && !(rx_pkt_nb_segs > 1 ||
> 
> Can we make the check simpler "(dev_info.max_rx_mempools == 0)"
> 
Sure, will simplify the condition.

> >   	    ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) !=
> 0))) {
> >   		/* Single pool/segment configuration */
> >   		rx_conf->rx_seg = NULL;



* [PATCH v3 1/1] app/testpmd: add valid check to verify multi mempool feature
  2022-11-17 12:55 ` [PATCH v2 " Hanumanth Pothula
  2022-11-17 15:00   ` Singh, Aman Deep
@ 2022-11-17 16:03   ` Hanumanth Pothula
  2022-11-17 23:36     ` Ferruh Yigit
  2022-11-18 11:13   ` Hanumanth Pothula
  2 siblings, 1 reply; 27+ messages in thread
From: Hanumanth Pothula @ 2022-11-17 16:03 UTC (permalink / raw)
  To: Aman Singh, Yuying Zhang
  Cc: dev, andrew.rybchenko, thomas, yux.jiang, jerinj, ndabilpuram, hpothula

Validate ethdev parameter 'max_rx_mempools' to know whether
device supports multi-mempool feature or not.

Bugzilla ID: 1128

Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>
v3:
 - Simplified conditional check.
 - Corrected spell, whether.
v2:
 - Rebased on tip of next-net/main.
---
 app/test-pmd/testpmd.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 4e25f77c6a..6c3d0948ec 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -2655,16 +2655,22 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 	union rte_eth_rxseg rx_useg[MAX_SEGS_BUFFER_SPLIT] = {};
 	struct rte_mempool *rx_mempool[MAX_MEMPOOL] = {};
 	struct rte_mempool *mpx;
+	struct rte_eth_dev_info dev_info;
 	unsigned int i, mp_n;
 	uint32_t prev_hdrs = 0;
 	int ret;
 
+	ret = rte_eth_dev_info_get(port_id, &dev_info);
+	if (ret != 0)
+		return ret;
+
 	/* Verify Rx queue configuration is single pool and segment or
 	 * multiple pool/segment.
+	 * @see rte_eth_dev_info::max_rx_mempools
 	 * @see rte_eth_rxconf::rx_mempools
 	 * @see rte_eth_rxconf::rx_seg
 	 */
-	if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
+	if ((dev_info.max_rx_mempools == 0) && !(rx_pkt_nb_segs > 1 ||
 	    ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0))) {
 		/* Single pool/segment configuration */
 		rx_conf->rx_seg = NULL;
-- 
2.25.1



* Re: [PATCH v3 1/1] app/testpmd: add valid check to verify multi mempool feature
  2022-11-17 16:03   ` [PATCH v3 " Hanumanth Pothula
@ 2022-11-17 23:36     ` Ferruh Yigit
  2022-11-18  6:51       ` Han, YingyaX
  0 siblings, 1 reply; 27+ messages in thread
From: Ferruh Yigit @ 2022-11-17 23:36 UTC (permalink / raw)
  To: Hanumanth Pothula, Aman Singh, Yuying Zhang, yingyax.han, yux.jiang
  Cc: dev, andrew.rybchenko, thomas, jerinj, ndabilpuram

On 11/17/2022 4:03 PM, Hanumanth Pothula wrote:
> Validate ethdev parameter 'max_rx_mempools' to know whether
> device supports multi-mempool feature or not.
> 
> Bugzilla ID: 1128
> 
> Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>
> v3:
>  - Simplified conditional check.
>  - Corrected spell, whether.
> v2:
>  - Rebased on tip of next-net/main.
> ---
>  app/test-pmd/testpmd.c | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> index 4e25f77c6a..6c3d0948ec 100644
> --- a/app/test-pmd/testpmd.c
> +++ b/app/test-pmd/testpmd.c
> @@ -2655,16 +2655,22 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
>  	union rte_eth_rxseg rx_useg[MAX_SEGS_BUFFER_SPLIT] = {};
>  	struct rte_mempool *rx_mempool[MAX_MEMPOOL] = {};
>  	struct rte_mempool *mpx;
> +	struct rte_eth_dev_info dev_info;
>  	unsigned int i, mp_n;
>  	uint32_t prev_hdrs = 0;
>  	int ret;
>  
> +	ret = rte_eth_dev_info_get(port_id, &dev_info);
> +	if (ret != 0)
> +		return ret;
> +
>  	/* Verify Rx queue configuration is single pool and segment or
>  	 * multiple pool/segment.
> +	 * @see rte_eth_dev_info::max_rx_mempools
>  	 * @see rte_eth_rxconf::rx_mempools
>  	 * @see rte_eth_rxconf::rx_seg
>  	 */
> -	if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
> +	if ((dev_info.max_rx_mempools == 0) && !(rx_pkt_nb_segs > 1 ||
>  	    ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0))) {
>  		/* Single pool/segment configuration */
>  		rx_conf->rx_seg = NULL;


Hi Yingya, Yu,

Can you please verify this patch?

Thanks,
ferruh


* RE: [PATCH v3 1/1] app/testpmd: add valid check to verify multi mempool feature
  2022-11-17 23:36     ` Ferruh Yigit
@ 2022-11-18  6:51       ` Han, YingyaX
  2022-11-18 11:37         ` Hanumanth Reddy Pothula
  0 siblings, 1 reply; 27+ messages in thread
From: Han, YingyaX @ 2022-11-18  6:51 UTC (permalink / raw)
  To: Ferruh Yigit, Hanumanth Pothula, Singh, Aman Deep, Zhang, Yuying,
	Jiang, YuX
  Cc: dev, andrew.rybchenko, thomas, jerinj, ndabilpuram

There is a new issue after applying the patch.
Failed to configure buffer_split for a single queue and the port can't come up.
The test steps and logs are as follows:
./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-9 -n 4  -a 31:00.0 --force-max-simd-bitwidth=64 -- -i --mbuf-size=2048,2048 --txq=4 --rxq=4
testpmd> port stop all
testpmd> port 0 rxq 2 rx_offload buffer_split on
testpmd> show port 0 rx_offload configuration
Rx Offloading Configuration of port 0 :
  Port : RSS_HASH
  Queue[ 0] : RSS_HASH
  Queue[ 1] : RSS_HASH
  Queue[ 2] : RSS_HASH BUFFER_SPLIT
  Queue[ 3] : RSS_HASH
testpmd> set rxhdrs eth
testpmd> port start all
Configuring Port 0 (socket 0)
No Rx segmentation offload configured
Fail to configure port 0 rx queues

BRs,
Yingya

-----Original Message-----
From: Ferruh Yigit <ferruh.yigit@amd.com> 
Sent: Friday, November 18, 2022 7:37 AM
To: Hanumanth Pothula <hpothula@marvell.com>; Singh, Aman Deep <aman.deep.singh@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>; Han, YingyaX <yingyax.han@intel.com>; Jiang, YuX <yux.jiang@intel.com>
Cc: dev@dpdk.org; andrew.rybchenko@oktetlabs.ru; thomas@monjalon.net; jerinj@marvell.com; ndabilpuram@marvell.com
Subject: Re: [PATCH v3 1/1] app/testpmd: add valid check to verify multi mempool feature

On 11/17/2022 4:03 PM, Hanumanth Pothula wrote:
> Validate ethdev parameter 'max_rx_mempools' to know whether device 
> supports multi-mempool feature or not.
> 
> Bugzilla ID: 1128
> 
> Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>
> v3:
>  - Simplified conditional check.
>  - Corrected spell, whether.
> v2:
>  - Rebased on tip of next-net/main.
> ---
>  app/test-pmd/testpmd.c | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index 
> 4e25f77c6a..6c3d0948ec 100644
> --- a/app/test-pmd/testpmd.c
> +++ b/app/test-pmd/testpmd.c
> @@ -2655,16 +2655,22 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
>  	union rte_eth_rxseg rx_useg[MAX_SEGS_BUFFER_SPLIT] = {};
>  	struct rte_mempool *rx_mempool[MAX_MEMPOOL] = {};
>  	struct rte_mempool *mpx;
> +	struct rte_eth_dev_info dev_info;
>  	unsigned int i, mp_n;
>  	uint32_t prev_hdrs = 0;
>  	int ret;
>  
> +	ret = rte_eth_dev_info_get(port_id, &dev_info);
> +	if (ret != 0)
> +		return ret;
> +
>  	/* Verify Rx queue configuration is single pool and segment or
>  	 * multiple pool/segment.
> +	 * @see rte_eth_dev_info::max_rx_mempools
>  	 * @see rte_eth_rxconf::rx_mempools
>  	 * @see rte_eth_rxconf::rx_seg
>  	 */
> -	if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
> +	if ((dev_info.max_rx_mempools == 0) && !(rx_pkt_nb_segs > 1 ||
>  	    ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0))) {
>  		/* Single pool/segment configuration */
>  		rx_conf->rx_seg = NULL;


Hi Yingya, Yu,

Can you please verify this patch?

Thanks,
ferruh


* [PATCH v3 1/1] app/testpmd: add valid check to verify multi mempool feature
  2022-11-17 12:55 ` [PATCH v2 " Hanumanth Pothula
  2022-11-17 15:00   ` Singh, Aman Deep
  2022-11-17 16:03   ` [PATCH v3 " Hanumanth Pothula
@ 2022-11-18 11:13   ` Hanumanth Pothula
  2022-11-18 14:13     ` [PATCH v4 " Hanumanth Pothula
  2 siblings, 1 reply; 27+ messages in thread
From: Hanumanth Pothula @ 2022-11-18 11:13 UTC (permalink / raw)
  To: Aman Singh, Yuying Zhang
  Cc: dev, andrew.rybchenko, thomas, yux.jiang, jerinj, ndabilpuram, hpothula

Validate ethdev parameter 'max_rx_mempools' to know whether
device supports multi-mempool feature or not.

Bugzilla ID: 1128

Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>
v4:
 - updated if condition.
v3:
 - Simplified conditional check.
 - Corrected spell, whether.
v2:
 - Rebased on tip of next-net/main.
---
 app/test-pmd/testpmd.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 4e25f77c6a..9fc14e6d6b 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -2655,17 +2655,23 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 	union rte_eth_rxseg rx_useg[MAX_SEGS_BUFFER_SPLIT] = {};
 	struct rte_mempool *rx_mempool[MAX_MEMPOOL] = {};
 	struct rte_mempool *mpx;
+	struct rte_eth_dev_info dev_info;
 	unsigned int i, mp_n;
 	uint32_t prev_hdrs = 0;
 	int ret;
 
+	ret = rte_eth_dev_info_get(port_id, &dev_info);
+	if (ret != 0)
+		return ret;
+
 	/* Verify Rx queue configuration is single pool and segment or
 	 * multiple pool/segment.
+	 * @see rte_eth_dev_info::max_rx_mempools
 	 * @see rte_eth_rxconf::rx_mempools
 	 * @see rte_eth_rxconf::rx_seg
 	 */
-	if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
-	    ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0))) {
+	if ((dev_info.max_rx_mempools == 0) && (rx_pkt_nb_segs < 1 ||
+	    ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) == 0))) {
 		/* Single pool/segment configuration */
 		rx_conf->rx_seg = NULL;
 		rx_conf->rx_nseg = 0;
-- 
2.25.1



* RE: [PATCH v3 1/1] app/testpmd: add valid check to verify multi mempool feature
  2022-11-18  6:51       ` Han, YingyaX
@ 2022-11-18 11:37         ` Hanumanth Reddy Pothula
  0 siblings, 0 replies; 27+ messages in thread
From: Hanumanth Reddy Pothula @ 2022-11-18 11:37 UTC (permalink / raw)
  To: Han, YingyaX, Ferruh Yigit, Singh, Aman Deep, Zhang, Yuying, Jiang, YuX
  Cc: dev, andrew.rybchenko, thomas, Jerin Jacob Kollanukkaran,
	Nithin Kumar Dabilpuram

Hi Yingya,
Uploaded a new patch set, can you please help in verifying the same:
https://patches.dpdk.org/project/dpdk/patch/20221118111358.3563760-1-hpothula@marvell.com/

Regards,
Hanumanth


> -----Original Message-----
> From: Han, YingyaX <yingyax.han@intel.com>
> Sent: Friday, November 18, 2022 12:22 PM
> To: Ferruh Yigit <ferruh.yigit@amd.com>; Hanumanth Reddy Pothula
> <hpothula@marvell.com>; Singh, Aman Deep
> <aman.deep.singh@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>;
> Jiang, YuX <yux.jiang@intel.com>
> Cc: dev@dpdk.org; andrew.rybchenko@oktetlabs.ru;
> thomas@monjalon.net; Jerin Jacob Kollanukkaran <jerinj@marvell.com>;
> Nithin Kumar Dabilpuram <ndabilpuram@marvell.com>
> Subject: [EXT] RE: [PATCH v3 1/1] app/testpmd: add valid check to verify
> multi mempool feature
> 
> External Email
> 
> ----------------------------------------------------------------------
> There is a new issue after applying the patch.
> Failed to configure buffer_split for a single queue and port can't up.
> The test steps and logs are as follows:
> ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-9 -n 4  -a 31:00.0 --
> force-max-simd-bitwidth=64 -- -i --mbuf-size=2048,2048 --txq=4 --rxq=4
> testpmd> port stop all
> testpmd> port 0 rxq 2 rx_offload buffer_split on
> testpmd> show port 0 rx_offload configuration
> Rx Offloading Configuration of port 0 :
>   Port : RSS_HASH
>   Queue[ 0] : RSS_HASH
>   Queue[ 1] : RSS_HASH
>   Queue[ 2] : RSS_HASH BUFFER_SPLIT
>   Queue[ 3] : RSS_HASH
> testpmd> set rxhdrs eth
> testpmd> port start all
> Configuring Port 0 (socket 0)
> No Rx segmentation offload configured
> Fail to configure port 0 rx queues
> 
> BRs,
> Yingya
> 
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@amd.com>
> Sent: Friday, November 18, 2022 7:37 AM
> To: Hanumanth Pothula <hpothula@marvell.com>; Singh, Aman Deep
> <aman.deep.singh@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>;
> Han, YingyaX <yingyax.han@intel.com>; Jiang, YuX <yux.jiang@intel.com>
> Cc: dev@dpdk.org; andrew.rybchenko@oktetlabs.ru;
> thomas@monjalon.net; jerinj@marvell.com; ndabilpuram@marvell.com
> Subject: Re: [PATCH v3 1/1] app/testpmd: add valid check to verify multi
> mempool feature
> 
> On 11/17/2022 4:03 PM, Hanumanth Pothula wrote:
> > Validate ethdev parameter 'max_rx_mempools' to know whether device
> > supports multi-mempool feature or not.
> >
> > Bugzilla ID: 1128
> >
> > Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>
> > v3:
> >  - Simplified conditional check.
> >  - Corrected spell, whether.
> > v2:
> >  - Rebased on tip of next-net/main.
> > ---
> >  app/test-pmd/testpmd.c | 8 +++++++-
> >  1 file changed, 7 insertions(+), 1 deletion(-)
> >
> > diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index
> > 4e25f77c6a..6c3d0948ec 100644
> > --- a/app/test-pmd/testpmd.c
> > +++ b/app/test-pmd/testpmd.c
> > @@ -2655,16 +2655,22 @@ rx_queue_setup(uint16_t port_id, uint16_t
> rx_queue_id,
> >  	union rte_eth_rxseg rx_useg[MAX_SEGS_BUFFER_SPLIT] = {};
> >  	struct rte_mempool *rx_mempool[MAX_MEMPOOL] = {};
> >  	struct rte_mempool *mpx;
> > +	struct rte_eth_dev_info dev_info;
> >  	unsigned int i, mp_n;
> >  	uint32_t prev_hdrs = 0;
> >  	int ret;
> >
> > +	ret = rte_eth_dev_info_get(port_id, &dev_info);
> > +	if (ret != 0)
> > +		return ret;
> > +
> >  	/* Verify Rx queue configuration is single pool and segment or
> >  	 * multiple pool/segment.
> > +	 * @see rte_eth_dev_info::max_rx_mempools
> >  	 * @see rte_eth_rxconf::rx_mempools
> >  	 * @see rte_eth_rxconf::rx_seg
> >  	 */
> > -	if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
> > +	if ((dev_info.max_rx_mempools == 0) && !(rx_pkt_nb_segs > 1 ||
> >  	    ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) !=
> 0))) {
> >  		/* Single pool/segment configuration */
> >  		rx_conf->rx_seg = NULL;
> 
> 
> Hi Yingya, Yu,
> 
> Can you please verify this patch?
> 
> Thanks,
> ferruh


* [PATCH v4 1/1] app/testpmd: add valid check to verify multi mempool feature
  2022-11-18 11:13   ` Hanumanth Pothula
@ 2022-11-18 14:13     ` Hanumanth Pothula
  2022-11-18 20:56       ` Ferruh Yigit
  2022-11-21 12:45       ` [PATCH v5 " Hanumanth Pothula
  0 siblings, 2 replies; 27+ messages in thread
From: Hanumanth Pothula @ 2022-11-18 14:13 UTC (permalink / raw)
  To: Aman Singh, Yuying Zhang
  Cc: dev, andrew.rybchenko, thomas, yux.jiang, jerinj, ndabilpuram, hpothula

Validate ethdev parameter 'max_rx_mempools' to know whether
device supports multi-mempool feature or not.

Bugzilla ID: 1128

Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>
v4:
 - updated if condition.
v3:
 - Simplified conditional check.
 - Corrected spell, whether.
v2:
 - Rebased on tip of next-net/main.
---
 app/test-pmd/testpmd.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 4e25f77c6a..c1b4dbd716 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -2655,17 +2655,23 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 	union rte_eth_rxseg rx_useg[MAX_SEGS_BUFFER_SPLIT] = {};
 	struct rte_mempool *rx_mempool[MAX_MEMPOOL] = {};
 	struct rte_mempool *mpx;
+	struct rte_eth_dev_info dev_info;
 	unsigned int i, mp_n;
 	uint32_t prev_hdrs = 0;
 	int ret;
 
+	ret = rte_eth_dev_info_get(port_id, &dev_info);
+	if (ret != 0)
+		return ret;
+
 	/* Verify Rx queue configuration is single pool and segment or
 	 * multiple pool/segment.
+	 * @see rte_eth_dev_info::max_rx_mempools
 	 * @see rte_eth_rxconf::rx_mempools
 	 * @see rte_eth_rxconf::rx_seg
 	 */
-	if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
-	    ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0))) {
+	if ((dev_info.max_rx_mempools == 0) && (rx_pkt_nb_segs <= 1 ||
+	    ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) == 0))) {
 		/* Single pool/segment configuration */
 		rx_conf->rx_seg = NULL;
 		rx_conf->rx_nseg = 0;
-- 
2.25.1



* Re: [PATCH v4 1/1] app/testpmd: add valid check to verify multi mempool feature
  2022-11-18 14:13     ` [PATCH v4 " Hanumanth Pothula
@ 2022-11-18 20:56       ` Ferruh Yigit
  2022-11-19  0:00         ` [EXT] " Hanumanth Reddy Pothula
  2022-11-21 12:45       ` [PATCH v5 " Hanumanth Pothula
  1 sibling, 1 reply; 27+ messages in thread
From: Ferruh Yigit @ 2022-11-18 20:56 UTC (permalink / raw)
  To: Hanumanth Pothula, thomas, andrew.rybchenko, ndabilpuram
  Cc: dev, yux.jiang, jerinj, Aman Singh, Yuying Zhang

On 11/18/2022 2:13 PM, Hanumanth Pothula wrote:
> Validate ethdev parameter 'max_rx_mempools' to know whether
> device supports multi-mempool feature or not.
> 

My preference would be to revert the testpmd patch [1] that adds this new
feature after -rc2, and add it back next release with a new testpmd
argument and the below-mentioned changes in the setup function.

@Andrew, @Thomas, @Jerin, what do you think?


[1]
4f04edcda769 ("app/testpmd: support multiple mbuf pools per Rx queue")

> Bugzilla ID: 1128
> 

Can you please add a Fixes line?

> Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>

Please put the changelog after '---', so that git will take it as a note.

> v4:
>  - updated if condition.
> v3:
>  - Simplified conditional check.
>  - Corrected spell, whether.
> v2:
>  - Rebased on tip of next-net/main.
> ---
>  app/test-pmd/testpmd.c | 10 ++++++++--
>  1 file changed, 8 insertions(+), 2 deletions(-)
> 
> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> index 4e25f77c6a..c1b4dbd716 100644
> --- a/app/test-pmd/testpmd.c
> +++ b/app/test-pmd/testpmd.c
> @@ -2655,17 +2655,23 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
>  	union rte_eth_rxseg rx_useg[MAX_SEGS_BUFFER_SPLIT] = {};
>  	struct rte_mempool *rx_mempool[MAX_MEMPOOL] = {};
>  	struct rte_mempool *mpx;
> +	struct rte_eth_dev_info dev_info;
>  	unsigned int i, mp_n;
>  	uint32_t prev_hdrs = 0;
>  	int ret;
>  
> +	ret = rte_eth_dev_info_get(port_id, &dev_info);
> +	if (ret != 0)
> +		return ret;
> +
>  	/* Verify Rx queue configuration is single pool and segment or
>  	 * multiple pool/segment.
> +	 * @see rte_eth_dev_info::max_rx_mempools
>  	 * @see rte_eth_rxconf::rx_mempools
>  	 * @see rte_eth_rxconf::rx_seg
>  	 */
> -	if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
> -	    ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0))) {
> +	if ((dev_info.max_rx_mempools == 0) && (rx_pkt_nb_segs <= 1 ||

Using `dev_info.max_rx_mempools` for the check means that if the device
supports multiple mempools, multiple mempools will be configured
independently of the user configuration. But the user may prefer a single
mempool or buffer split.

Right now the only PMD supporting multiple mempools is 'cnxk', so this
doesn't impact others, but I think this is not correct.

Instead of re-using the testpmd "mbuf-size" parameter (it is already used
for two other features, and this is the reason for the defect) it would
be better to have an explicit parameter for the multiple mempool feature.


> +	    ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) == 0))) {
>  		/* Single pool/segment configuration */
>  		rx_conf->rx_seg = NULL;
>  		rx_conf->rx_nseg = 0;


Logic seems correct, although I have not tested.

The current function tries to detect the requested feature and sets up
queues accordingly; the features are:
- single mempool
- packet split (to multiple mempool)
- multiple mempool (various size)

And the logic in the function is:
``
if ( (! multiple mempool) && (! packet split))
	setup for single mempool
	exit

if (packet split)
	setup packet split
else
	setup multiple mempool
``

What do you think of:
a) simplifying the logic by making single mempool the fallback and last
option, instead of detecting the non-existence of other configs
b) having an explicit check for multiple mempool

Like:

``
if (packet split)
	setup packet split
	exit
else if (multiple mempool)
	setup multiple mempool
	exit

setup for single mempool
``

I think this both solves the defect and simplifies the code.
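
As a rough C sketch of that flow inside rx_queue_setup() (illustrative
only; it reuses the testpmd globals and rx_conf fields already visible in
the diff, abbreviates the per-segment fill, and 'multi_mempool_requested'
is just a placeholder for whatever explicit check we pick):

``
if (rx_pkt_nb_segs > 1 &&
    (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0) {
	/* buffer split: describe each segment of the packet */
	for (i = 0; i < rx_pkt_nb_segs; i++) {
		/* fill rx_useg[i].split (length/offset/mp/proto_hdr) as today */
	}
	rx_conf->rx_seg = rx_useg;
	rx_conf->rx_nseg = rx_pkt_nb_segs;
	ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
				     socket_id, rx_conf, NULL);
} else if (multi_mempool_requested) {
	/* multiple mempools of different mbuf sizes */
	for (i = 0; i < mbuf_data_size_n; i++) {
		mpx = mbuf_pool_find(socket_id, i);
		rx_mempool[i] = mpx ? mpx : mp;
	}
	rx_conf->rx_mempools = rx_mempool;
	rx_conf->rx_nmempool = mbuf_data_size_n;
	ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
				     socket_id, rx_conf, NULL);
} else {
	/* fallback: single pool and single segment */
	rx_conf->rx_seg = NULL;
	rx_conf->rx_nseg = 0;
	ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
				     socket_id, rx_conf, mp);
}
``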


* RE: [EXT] Re: [PATCH v4 1/1] app/testpmd: add valid check to verify multi mempool feature
  2022-11-18 20:56       ` Ferruh Yigit
@ 2022-11-19  0:00         ` Hanumanth Reddy Pothula
  2022-11-21 10:08           ` Ferruh Yigit
  0 siblings, 1 reply; 27+ messages in thread
From: Hanumanth Reddy Pothula @ 2022-11-19  0:00 UTC (permalink / raw)
  To: Ferruh Yigit, thomas, andrew.rybchenko, Nithin Kumar Dabilpuram
  Cc: dev, yux.jiang, Jerin Jacob Kollanukkaran, Aman Singh, Yuying Zhang



> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@amd.com>
> Sent: Saturday, November 19, 2022 2:26 AM
> To: Hanumanth Reddy Pothula <hpothula@marvell.com>;
> thomas@monjalon.net; andrew.rybchenko@oktetlabs.ru; Nithin Kumar
> Dabilpuram <ndabilpuram@marvell.com>
> Cc: dev@dpdk.org; yux.jiang@intel.com; Jerin Jacob Kollanukkaran
> <jerinj@marvell.com>; Aman Singh <aman.deep.singh@intel.com>; Yuying
> Zhang <yuying.zhang@intel.com>
> Subject: [EXT] Re: [PATCH v4 1/1] app/testpmd: add valid check to verify
> multi mempool feature
> 
> External Email
> 
> ----------------------------------------------------------------------
> On 11/18/2022 2:13 PM, Hanumanth Pothula wrote:
> > Validate ethdev parameter 'max_rx_mempools' to know whether device
> > supports multi-mempool feature or not.
> >
> 
> My preference would be revert the testpmd patch [1] that adds this new
> feature after -rc2, and add it back next release with new testpmd argument
> and below mentioned changes in setup function.
> 
> @Andrew, @Thomas, @Jerin, what do you think?
> 
> 
> [1]
> 4f04edcda769 ("app/testpmd: support multiple mbuf pools per Rx queue")
> 
> > Bugzilla ID: 1128
> >
> 
> Can you please add fixes line?
> 
Ack
> > Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>
> 
> Please put the changelog after '---', which than git will take it as note.
> 
Ack
> > v4:
> >  - updated if condition.
> > v3:
> >  - Simplified conditional check.
> >  - Corrected spell, whether.
> > v2:
> >  - Rebased on tip of next-net/main.
> > ---
> >  app/test-pmd/testpmd.c | 10 ++++++++--
> >  1 file changed, 8 insertions(+), 2 deletions(-)
> >
> > diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index
> > 4e25f77c6a..c1b4dbd716 100644
> > --- a/app/test-pmd/testpmd.c
> > +++ b/app/test-pmd/testpmd.c
> > @@ -2655,17 +2655,23 @@ rx_queue_setup(uint16_t port_id, uint16_t
> rx_queue_id,
> >  	union rte_eth_rxseg rx_useg[MAX_SEGS_BUFFER_SPLIT] = {};
> >  	struct rte_mempool *rx_mempool[MAX_MEMPOOL] = {};
> >  	struct rte_mempool *mpx;
> > +	struct rte_eth_dev_info dev_info;
> >  	unsigned int i, mp_n;
> >  	uint32_t prev_hdrs = 0;
> >  	int ret;
> >
> > +	ret = rte_eth_dev_info_get(port_id, &dev_info);
> > +	if (ret != 0)
> > +		return ret;
> > +
> >  	/* Verify Rx queue configuration is single pool and segment or
> >  	 * multiple pool/segment.
> > +	 * @see rte_eth_dev_info::max_rx_mempools
> >  	 * @see rte_eth_rxconf::rx_mempools
> >  	 * @see rte_eth_rxconf::rx_seg
> >  	 */
> > -	if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
> > -	    ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) !=
> 0))) {
> > +	if ((dev_info.max_rx_mempools == 0) && (rx_pkt_nb_segs <= 1 ||
> 
> Using `dev_info.max_rx_mempools` for check means if device supports
> multiple mempool, multiple mempool will be configured independent from
> user configuration. But user may prefer singe mempool or buffer split.
> 
Please find my suggested logic.

> Right now only PMD support multiple mempool is 'cnxk', so this doesn't
> impact others but I think this is not correct.
> 
> Instead of re-using testpmd "mbuf-size" parameter (it is already used for
> two other features, and this is the reason of the defect) it would be better
> to have an explicit parameter for multiple mempool feature.
> 
> 
> > +	    ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) ==
> 0))) {
> >  		/* Single pool/segment configuration */
> >  		rx_conf->rx_seg = NULL;
> >  		rx_conf->rx_nseg = 0;
> 
> 
> Logic seems correct, although I have not tested.
> 
> Current functions tries to detect the requested feature and setup queues
> accordingly, features are:
> - single mempool
> - packet split (to multiple mempool)
> - multiple mempool (various size)
> 
> And the logic in the function is:
> ``
> if ( (! multiple mempool) && (! packet split))
> 	setup for single mempool
> 	exit
> 
> if (packet split)
> 	setup packet split
> else
> 	setup multiple mempool
> ``
> 
> What do you think to
> a) simplify logic by making single mempool as fallback and last option,
> instead of detecting non existence of other configs
> b) have explicit check for multiple mempool
> 
> Like:
> 
> ``
> if (packet split)
> 	setup packet split
> 	exit
> else if (multiple mempool)
> 	setup multiple mempool
> 	exit
> 
> setup for single mempool
> ``
> 
> I think this both solves the defect and simplifies the code.

Yes Ferruh, your suggested logic simplifies the code.

Along the lines of your proposed logic, the below if conditions might work fine for all features (buffer split/multi-mempool) supported by the PMD and the user's preference:

if (rx_pkt_nb_segs > 1 ||
            rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
	/*multi-segment (buffer split)*/
} else if (mbuf_data_size_n > 1 && dev_info.max_rx_mempools > 1) {
	/*multi-mempool*/
} else {
	/* single pool and segment */
} 

Alternatively, adding a new Rx offload parameter for the multi-mempool feature; though I think it might not be required, as using dev_info.max_rx_mempools works fine:

if (rx_pkt_nb_segs > 1 ||
            rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
	/*multi-segment (buffer split)*/
} else if (mbuf_data_size_n > 1 && rx_conf->offloads & RTE_ETH_RX_OFFLOAD_MULTI_MEMPOOL ) {
	/*multi-mempool*/
} else {
	/* single pool and segment */
}

Please let me know your inputs on the above logic.



* Re: [EXT] Re: [PATCH v4 1/1] app/testpmd: add valid check to verify multi mempool feature
  2022-11-19  0:00         ` [EXT] " Hanumanth Reddy Pothula
@ 2022-11-21 10:08           ` Ferruh Yigit
  2022-11-21 10:44             ` Hanumanth Reddy Pothula
  0 siblings, 1 reply; 27+ messages in thread
From: Ferruh Yigit @ 2022-11-21 10:08 UTC (permalink / raw)
  To: Hanumanth Reddy Pothula, thomas, andrew.rybchenko,
	Nithin Kumar Dabilpuram
  Cc: dev, yux.jiang, Jerin Jacob Kollanukkaran, Aman Singh, Yuying Zhang

On 11/19/2022 12:00 AM, Hanumanth Reddy Pothula wrote:
> 
> 
>> -----Original Message-----
>> From: Ferruh Yigit <ferruh.yigit@amd.com>
>> Sent: Saturday, November 19, 2022 2:26 AM
>> To: Hanumanth Reddy Pothula <hpothula@marvell.com>;
>> thomas@monjalon.net; andrew.rybchenko@oktetlabs.ru; Nithin Kumar
>> Dabilpuram <ndabilpuram@marvell.com>
>> Cc: dev@dpdk.org; yux.jiang@intel.com; Jerin Jacob Kollanukkaran
>> <jerinj@marvell.com>; Aman Singh <aman.deep.singh@intel.com>; Yuying
>> Zhang <yuying.zhang@intel.com>
>> Subject: [EXT] Re: [PATCH v4 1/1] app/testpmd: add valid check to verify
>> multi mempool feature
>>
>> External Email
>>
>> ----------------------------------------------------------------------
>> On 11/18/2022 2:13 PM, Hanumanth Pothula wrote:
>>> Validate ethdev parameter 'max_rx_mempools' to know whether device
>>> supports multi-mempool feature or not.
>>>
>>
>> My preference would be revert the testpmd patch [1] that adds this new
>> feature after -rc2, and add it back next release with new testpmd argument
>> and below mentioned changes in setup function.
>>
>> @Andrew, @Thomas, @Jerin, what do you think?
>>
>>
>> [1]
>> 4f04edcda769 ("app/testpmd: support multiple mbuf pools per Rx queue")
>>
>>> Bugzilla ID: 1128
>>>
>>
>> Can you please add fixes line?
>>
> Ack
>>> Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>
>>
>> Please put the changelog after '---', which than git will take it as note.
>>
> Ack
>>> v4:
>>>  - updated if condition.
>>> v3:
>>>  - Simplified conditional check.
>>>  - Corrected spell, whether.
>>> v2:
>>>  - Rebased on tip of next-net/main.
>>> ---
>>>  app/test-pmd/testpmd.c | 10 ++++++++--
>>>  1 file changed, 8 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index
>>> 4e25f77c6a..c1b4dbd716 100644
>>> --- a/app/test-pmd/testpmd.c
>>> +++ b/app/test-pmd/testpmd.c
>>> @@ -2655,17 +2655,23 @@ rx_queue_setup(uint16_t port_id, uint16_t
>> rx_queue_id,
>>>  	union rte_eth_rxseg rx_useg[MAX_SEGS_BUFFER_SPLIT] = {};
>>>  	struct rte_mempool *rx_mempool[MAX_MEMPOOL] = {};
>>>  	struct rte_mempool *mpx;
>>> +	struct rte_eth_dev_info dev_info;
>>>  	unsigned int i, mp_n;
>>>  	uint32_t prev_hdrs = 0;
>>>  	int ret;
>>>
>>> +	ret = rte_eth_dev_info_get(port_id, &dev_info);
>>> +	if (ret != 0)
>>> +		return ret;
>>> +
>>>  	/* Verify Rx queue configuration is single pool and segment or
>>>  	 * multiple pool/segment.
>>> +	 * @see rte_eth_dev_info::max_rx_mempools
>>>  	 * @see rte_eth_rxconf::rx_mempools
>>>  	 * @see rte_eth_rxconf::rx_seg
>>>  	 */
>>> -	if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
>>> -	    ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) !=
>> 0))) {
>>> +	if ((dev_info.max_rx_mempools == 0) && (rx_pkt_nb_segs <= 1 ||
>>
>> Using `dev_info.max_rx_mempools` for check means if device supports
>> multiple mempool, multiple mempool will be configured independent from
>> user configuration. But user may prefer singe mempool or buffer split.
>>
> Please find my suggested logic.
> 
>> Right now only PMD support multiple mempool is 'cnxk', so this doesn't
>> impact others but I think this is not correct.
>>
>> Instead of re-using testpmd "mbuf-size" parameter (it is already used for
>> two other features, and this is the reason of the defect) it would be better
>> to have an explicit parameter for multiple mempool feature.
>>
>>
>>> +	    ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) ==
>> 0))) {
>>>  		/* Single pool/segment configuration */
>>>  		rx_conf->rx_seg = NULL;
>>>  		rx_conf->rx_nseg = 0;
>>
>>
>> Logic seems correct, although I have not tested.
>>
>> Current functions tries to detect the requested feature and setup queues
>> accordingly, features are:
>> - single mempool
>> - packet split (to multiple mempool)
>> - multiple mempool (various size)
>>
>> And the logic in the function is:
>> ``
>> if ( (! multiple mempool) && (! packet split))
>> 	setup for single mempool
>> 	exit
>>
>> if (packet split)
>> 	setup packet split
>> else
>> 	setup multiple mempool
>> ``
>>
>> What do you think to
>> a) simplify logic by making single mempool as fallback and last option,
>> instead of detecting non existence of other configs
>> b) have explicit check for multiple mempool
>>
>> Like:
>>
>> ``
>> if (packet split)
>> 	setup packet split
>> 	exit
>> else if (multiple mempool)
>> 	setup multiple mempool
>> 	exit
>>
>> setup for single mempool
>> ``
>>
>> I think this both solves the defect and simplifies the code.
> 
> Yes Ferruh your suggested logic simplifies the code.
> 
> In the lines of your proposed logic,  below if conditions might work fine for all features(buffer-split/multi-mempool) supported by PMD and user preference,
> 
> if (rx_pkt_nb_segs > 1 ||
>             rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
> 	/*multi-segment (buffer split)*/
> } else if (mbuf_data_size_n > 1 && dev_info.max_rx_mempools > 1) {
> 	/*multi-mempool*/
> } else {
> 	/* single pool and segment */
> } 
> 

`mbuf_data_size_n > 1` may mean the user is requesting multiple segments,
or buffer split, so I am not sure about using this value to decide on the
multiple mempool feature; it can create side effects like Bug 1128 does.

> Or  adding new Rx offload parameter for multi_mempool feature, I think it might not be required, using dev_info.max_rx_mempools works fine.
> 

At the ethdev level, we don't need an offload flag, since the separate
config options clarify the intention there.
What is needed is a way to understand the user's intention in the
application (testpmd) and configure the device accordingly.
That is why I think 'dev_info.max_rx_mempools' does not work here:
it is a way for the device to say multiple mempools are supported,
not a way to get the user's intention.
With your logic, when the device supports multiple mempools, they are
used independently of what the user requests.

I suggest having a testpmd argument that explicitly says the multiple
mempool feature is requested; this will help to distinguish the buffer
split, multiple mempool, and single mempool cases.
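
For example (option name and placement are only illustrative here), a
long option in launch_args_parse() that sets a global flag:

``
/* in the long-options table: */
{ "multi-mempool",              0, 0, 0 },

/* in the option-handling loop: */
if (!strcmp(lgopts[opt_idx].name, "multi-mempool"))
	multi_mempool = 1;
``

Then rx_queue_setup() would take the multi-mempool path only when that
flag is set and the device reports a non-zero 'max_rx_mempools'.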

Thanks.

> if (rx_pkt_nb_segs > 1 ||
>             rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
> 	/*multi-segment (buffer split)*/
> } else if (mbuf_data_size_n > 1 && rx_conf->offloads & RTE_ETH_RX_OFFLOAD_MULTI_MEMPOOL ) {
> 	/*multi-mempool*/
> } else {
> 	/* single pool and segment */
> }
> 
> Please let me know your inputs on above logic.
> 



* RE: [EXT] Re: [PATCH v4 1/1] app/testpmd: add valid check to verify multi mempool feature
  2022-11-21 10:08           ` Ferruh Yigit
@ 2022-11-21 10:44             ` Hanumanth Reddy Pothula
  0 siblings, 0 replies; 27+ messages in thread
From: Hanumanth Reddy Pothula @ 2022-11-21 10:44 UTC (permalink / raw)
  To: Ferruh Yigit, thomas, andrew.rybchenko, Nithin Kumar Dabilpuram
  Cc: dev, yux.jiang, Jerin Jacob Kollanukkaran, Aman Singh, Yuying Zhang



> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@amd.com>
> Sent: Monday, November 21, 2022 3:38 PM
> To: Hanumanth Reddy Pothula <hpothula@marvell.com>;
> thomas@monjalon.net; andrew.rybchenko@oktetlabs.ru; Nithin Kumar
> Dabilpuram <ndabilpuram@marvell.com>
> Cc: dev@dpdk.org; yux.jiang@intel.com; Jerin Jacob Kollanukkaran
> <jerinj@marvell.com>; Aman Singh <aman.deep.singh@intel.com>; Yuying
> Zhang <yuying.zhang@intel.com>
> Subject: Re: [EXT] Re: [PATCH v4 1/1] app/testpmd: add valid check to
> verify multi mempool feature
> 
> On 11/19/2022 12:00 AM, Hanumanth Reddy Pothula wrote:
> >
> >
> >> -----Original Message-----
> >> From: Ferruh Yigit <ferruh.yigit@amd.com>
> >> Sent: Saturday, November 19, 2022 2:26 AM
> >> To: Hanumanth Reddy Pothula <hpothula@marvell.com>;
> >> thomas@monjalon.net; andrew.rybchenko@oktetlabs.ru; Nithin Kumar
> >> Dabilpuram <ndabilpuram@marvell.com>
> >> Cc: dev@dpdk.org; yux.jiang@intel.com; Jerin Jacob Kollanukkaran
> >> <jerinj@marvell.com>; Aman Singh <aman.deep.singh@intel.com>;
> Yuying
> >> Zhang <yuying.zhang@intel.com>
> >> Subject: [EXT] Re: [PATCH v4 1/1] app/testpmd: add valid check to
> >> verify multi mempool feature
> >>
> >> External Email
> >>
> >> ---------------------------------------------------------------------
> >> - On 11/18/2022 2:13 PM, Hanumanth Pothula wrote:
> >>> Validate ethdev parameter 'max_rx_mempools' to know whether
> device
> >>> supports multi-mempool feature or not.
> >>>
> >>
> >> My preference would be revert the testpmd patch [1] that adds this
> >> new feature after -rc2, and add it back next release with new testpmd
> >> argument and below mentioned changes in setup function.
> >>
> >> @Andrew, @Thomas, @Jerin, what do you think?
> >>
> >>
> >> [1]
> >> 4f04edcda769 ("app/testpmd: support multiple mbuf pools per Rx
> >> queue")
> >>
> >>> Bugzilla ID: 1128
> >>>
> >>
> >> Can you please add fixes line?
> >>
> > Ack
> >>> Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>
> >>
> >> Please put the changelog after '---', which than git will take it as note.
> >>
> > Ack
> >>> v4:
> >>>  - updated if condition.
> >>> v3:
> >>>  - Simplified conditional check.
> >>>  - Corrected spell, whether.
> >>> v2:
> >>>  - Rebased on tip of next-net/main.
> >>> ---
> >>>  app/test-pmd/testpmd.c | 10 ++++++++--
> >>>  1 file changed, 8 insertions(+), 2 deletions(-)
> >>>
> >>> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index
> >>> 4e25f77c6a..c1b4dbd716 100644
> >>> --- a/app/test-pmd/testpmd.c
> >>> +++ b/app/test-pmd/testpmd.c
> >>> @@ -2655,17 +2655,23 @@ rx_queue_setup(uint16_t port_id,
> uint16_t
> >> rx_queue_id,
> >>>  	union rte_eth_rxseg rx_useg[MAX_SEGS_BUFFER_SPLIT] = {};
> >>>  	struct rte_mempool *rx_mempool[MAX_MEMPOOL] = {};
> >>>  	struct rte_mempool *mpx;
> >>> +	struct rte_eth_dev_info dev_info;
> >>>  	unsigned int i, mp_n;
> >>>  	uint32_t prev_hdrs = 0;
> >>>  	int ret;
> >>>
> >>> +	ret = rte_eth_dev_info_get(port_id, &dev_info);
> >>> +	if (ret != 0)
> >>> +		return ret;
> >>> +
> >>>  	/* Verify Rx queue configuration is single pool and segment or
> >>>  	 * multiple pool/segment.
> >>> +	 * @see rte_eth_dev_info::max_rx_mempools
> >>>  	 * @see rte_eth_rxconf::rx_mempools
> >>>  	 * @see rte_eth_rxconf::rx_seg
> >>>  	 */
> >>> -	if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
> >>> -	    ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) !=
> >> 0))) {
> >>> +	if ((dev_info.max_rx_mempools == 0) && (rx_pkt_nb_segs <= 1 ||
> >>
> >> Using `dev_info.max_rx_mempools` for check means if device supports
> >> multiple mempool, multiple mempool will be configured independent
> >> from user configuration. But user may prefer singe mempool or buffer
> split.
> >>
> > Please find my suggested logic.
> >
> >> Right now only PMD support multiple mempool is 'cnxk', so this
> >> doesn't impact others but I think this is not correct.
> >>
> >> Instead of re-using testpmd "mbuf-size" parameter (it is already used
> >> for two other features, and this is the reason of the defect) it
> >> would be better to have an explicit parameter for multiple mempool
> feature.
> >>
> >>
> >>> +	    ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) ==
> >> 0))) {
> >>>  		/* Single pool/segment configuration */
> >>>  		rx_conf->rx_seg = NULL;
> >>>  		rx_conf->rx_nseg = 0;
> >>
> >>
> >> Logic seems correct, although I have not tested.
> >>
> >> Current functions tries to detect the requested feature and setup
> >> queues accordingly, features are:
> >> - single mempool
> >> - packet split (to multiple mempool)
> >> - multiple mempool (various size)
> >>
> >> And the logic in the function is:
> >> ``
> >> if ( (! multiple mempool) && (! packet split))
> >> 	setup for single mempool
> >> 	exit
> >>
> >> if (packet split)
> >> 	setup packet split
> >> else
> >> 	setup multiple mempool
> >> ``
> >>
> >> What do you think to
> >> a) simplify logic by making single mempool as fallback and last
> >> option, instead of detecting non existence of other configs
> >> b) have explicit check for multiple mempool
> >>
> >> Like:
> >>
> >> ``
> >> if (packet split)
> >> 	setup packet split
> >> 	exit
> >> else if (multiple mempool)
> >> 	setup multiple mempool
> >> 	exit
> >>
> >> setup for single mempool
> >> ``
> >>
> >> I think this both solves the defect and simplifies the code.
> >
> > Yes Ferruh your suggested logic simplifies the code.
> >
> > In the lines of your proposed logic,  below if conditions might work
> > fine for all features(buffer-split/multi-mempool) supported by PMD and
> > user preference,
> >
> > if (rx_pkt_nb_segs > 1 ||
> >             rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
> > 	/*multi-segment (buffer split)*/
> > } else if (mbuf_data_size_n > 1 && dev_info.max_rx_mempools > 1) {
> > 	/*multi-mempool*/
> > } else {
> > 	/* single pool and segment */
> > }
> >
> 
> `mbuf_data_size_n > 1` may mean user is requesting multiple segment, or
> buffer split, so I am not sure about using this value to decide on
> multiple mempool feature, it can create side effect as Bug 1128 does.
> 
> > Or  adding new Rx offload parameter for multi_mempool feature, I think it
> might not be required, using dev_info.max_rx_mempools works fine.
> >
> 
> In ethdev level, we don't need an offload flag, since separated config
> options clarify the intention there.
> What is needed is a way to understand users intention, for application
> (testpmd) and configure device accordingly.
> That is why I think 'dev_info.max_rx_mempools' is not working fine,
> because that is a way for device to say multiple mempool is supported,
> it is not to get user intention.
> In your logic when device supports multiple mempool, use it independent
> from what user request.
> 
> I suggest having a testpmd argument explicitly to say multiple mempool
> feature is requested, this will help to distinguish buffer split,
> multiple mempool, single mempool features.
> 
> Thanks.

Sure, will upload a new patch set with a testpmd argument that controls whether the multiple mempool feature is enabled or disabled.

> 
> > if (rx_pkt_nb_segs > 1 ||
> >             rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
> > 	/*multi-segment (buffer split)*/
> > } else if (mbuf_data_size_n > 1 && rx_conf->offloads &
> RTE_ETH_RX_OFFLOAD_MULTI_MEMPOOL ) {
> > 	/*multi-mempool*/
> > } else {
> > 	/* single pool and segment */
> > }
> >
> > Please let me know your inputs on above logic.
> >



* [PATCH v5 1/1] app/testpmd: add valid check to verify multi mempool feature
  2022-11-18 14:13     ` [PATCH v4 " Hanumanth Pothula
  2022-11-18 20:56       ` Ferruh Yigit
@ 2022-11-21 12:45       ` Hanumanth Pothula
  2022-11-21 13:22         ` Ferruh Yigit
  2022-11-21 14:33         ` [PATCH v6 " Hanumanth Pothula
  1 sibling, 2 replies; 27+ messages in thread
From: Hanumanth Pothula @ 2022-11-21 12:45 UTC (permalink / raw)
  To: Aman Singh, Yuying Zhang
  Cc: dev, andrew.rybchenko, thomas, yux.jiang, jerinj, ndabilpuram, hpothula

Validate ethdev parameter 'max_rx_mempools' to know whether
device supports multi-mempool feature or not.

Also, add a new testpmd command line argument, multi-mempool,
to control the multi-mempool feature. By default it is disabled.

Bugzilla ID: 1128
Fixes: 4f04edcda769 ("app/testpmd: support multiple mbuf pools per Rx queue")

Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>

---
v5:
 - Added testpmd argument to enable multi-mempool feature.
 - Simplified logic to distinguish between multi-mempool,
   multi-segment and single pool/segment.
v4:
 - updated if condition.
v3:
 - Simplified conditional check.
 - Corrected spell, whether.
v2:
 - Rebased on tip of next-net/main.
---
 app/test-pmd/parameters.c |  3 ++
 app/test-pmd/testpmd.c    | 58 +++++++++++++++++++++++++--------------
 app/test-pmd/testpmd.h    |  1 +
 3 files changed, 41 insertions(+), 21 deletions(-)

diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index aed4cdcb84..26d6450db4 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -700,6 +700,7 @@ launch_args_parse(int argc, char** argv)
 		{ "rx-mq-mode",                 1, 0, 0 },
 		{ "record-core-cycles",         0, 0, 0 },
 		{ "record-burst-stats",         0, 0, 0 },
+		{ "multi-mempool",              0, 0, 0 },
 		{ PARAM_NUM_PROCS,              1, 0, 0 },
 		{ PARAM_PROC_ID,                1, 0, 0 },
 		{ 0, 0, 0, 0 },
@@ -1449,6 +1450,8 @@ launch_args_parse(int argc, char** argv)
 				record_core_cycles = 1;
 			if (!strcmp(lgopts[opt_idx].name, "record-burst-stats"))
 				record_burst_stats = 1;
+			if (!strcmp(lgopts[opt_idx].name, "multi-mempool"))
+				multi_mempool = 1;
 			if (!strcmp(lgopts[opt_idx].name, PARAM_NUM_PROCS))
 				num_procs = atoi(optarg);
 			if (!strcmp(lgopts[opt_idx].name, PARAM_PROC_ID))
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 4e25f77c6a..9dfc4c9d0e 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -497,6 +497,11 @@ uint8_t record_burst_stats;
  */
 uint32_t rxq_share;
 
+/*
+ * Multi-mempool support, disabled by default.
+ */
+uint8_t multi_mempool;
+
 unsigned int num_sockets = 0;
 unsigned int socket_ids[RTE_MAX_NUMA_NODES];
 
@@ -2655,28 +2660,23 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 	union rte_eth_rxseg rx_useg[MAX_SEGS_BUFFER_SPLIT] = {};
 	struct rte_mempool *rx_mempool[MAX_MEMPOOL] = {};
 	struct rte_mempool *mpx;
+	struct rte_eth_dev_info dev_info;
 	unsigned int i, mp_n;
 	uint32_t prev_hdrs = 0;
 	int ret;
 
+	ret = rte_eth_dev_info_get(port_id, &dev_info);
+	if (ret != 0)
+		return ret;
+
 	/* Verify Rx queue configuration is single pool and segment or
 	 * multiple pool/segment.
+	 * @see rte_eth_dev_info::max_rx_mempools
 	 * @see rte_eth_rxconf::rx_mempools
 	 * @see rte_eth_rxconf::rx_seg
 	 */
-	if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
-	    ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0))) {
-		/* Single pool/segment configuration */
-		rx_conf->rx_seg = NULL;
-		rx_conf->rx_nseg = 0;
-		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id,
-					     nb_rx_desc, socket_id,
-					     rx_conf, mp);
-		goto exit;
-	}
-
-	if (rx_pkt_nb_segs > 1 ||
-	    rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
+	if ((rx_pkt_nb_segs > 1) &&
+	    (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) {
 		/* multi-segment configuration */
 		for (i = 0; i < rx_pkt_nb_segs; i++) {
 			struct rte_eth_rxseg_split *rx_seg = &rx_useg[i].split;
@@ -2701,7 +2701,14 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 		}
 		rx_conf->rx_nseg = rx_pkt_nb_segs;
 		rx_conf->rx_seg = rx_useg;
-	} else {
+		rx_conf->rx_mempools = NULL;
+		rx_conf->rx_nmempool = 0;
+		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
+				    socket_id, rx_conf, NULL);
+		rx_conf->rx_seg = NULL;
+		rx_conf->rx_nseg = 0;
+	} else if ((multi_mempool == 1) && (dev_info.max_rx_mempools != 0) &&
+		  (mbuf_data_size_n > 1)) {
 		/* multi-pool configuration */
 		for (i = 0; i < mbuf_data_size_n; i++) {
 			mpx = mbuf_pool_find(socket_id, i);
@@ -2709,14 +2716,23 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 		}
 		rx_conf->rx_mempools = rx_mempool;
 		rx_conf->rx_nmempool = mbuf_data_size_n;
-	}
-	ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
+		rx_conf->rx_seg = NULL;
+		rx_conf->rx_nseg = 0;
+		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
 				    socket_id, rx_conf, NULL);
-	rx_conf->rx_seg = NULL;
-	rx_conf->rx_nseg = 0;
-	rx_conf->rx_mempools = NULL;
-	rx_conf->rx_nmempool = 0;
-exit:
+		rx_conf->rx_mempools = NULL;
+		rx_conf->rx_nmempool = 0;
+	} else {
+		/* Single pool/segment configuration */
+		rx_conf->rx_seg = NULL;
+		rx_conf->rx_nseg = 0;
+		rx_conf->rx_mempools = NULL;
+		rx_conf->rx_nmempool = 0;
+		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
+				    socket_id, rx_conf, mp);
+	}
+
+
 	ports[port_id].rxq[rx_queue_id].state = rx_conf->rx_deferred_start ?
 						RTE_ETH_QUEUE_STATE_STOPPED :
 						RTE_ETH_QUEUE_STATE_STARTED;
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index aaf69c349a..9472a2cb19 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -464,6 +464,7 @@ enum dcb_mode_enable
 extern uint8_t xstats_hide_zero; /**< Hide zero values for xstats display */
 
 /* globals used for configuration */
+extern uint8_t multi_mempool; /**< Enables multi-mempool feature. */
 extern uint8_t record_core_cycles; /**< Enables measurement of CPU cycles */
 extern uint8_t record_burst_stats; /**< Enables display of RX and TX bursts */
 extern uint16_t verbose_level; /**< Drives messages being displayed, if any. */
-- 
2.25.1



* Re: [PATCH v5 1/1] app/testpmd: add valid check to verify multi mempool feature
  2022-11-21 12:45       ` [PATCH v5 " Hanumanth Pothula
@ 2022-11-21 13:22         ` Ferruh Yigit
  2022-11-21 13:36           ` [EXT] " Hanumanth Reddy Pothula
  2022-11-21 14:33         ` [PATCH v6 " Hanumanth Pothula
  1 sibling, 1 reply; 27+ messages in thread
From: Ferruh Yigit @ 2022-11-21 13:22 UTC (permalink / raw)
  To: Hanumanth Pothula, Aman Singh, Yuying Zhang
  Cc: dev, andrew.rybchenko, thomas, yux.jiang, jerinj, ndabilpuram

On 11/21/2022 12:45 PM, Hanumanth Pothula wrote:
> Validate ethdev parameter 'max_rx_mempools' to know whether
> device supports multi-mempool feature or not.
> 
> Also, add new testpmd command line argument, multi-mempool,
> to control multi-mempool feature. By default its disabled.
> 
> Bugzilla ID: 1128
> Fixes: 4f04edcda769 ("app/testpmd: support multiple mbuf pools per Rx queue")
> 
> Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>
> 
> ---
> v5:
>  - Added testpmd argument to enable multi-mempool feature.
>  - Simplified logic to distinguish between multi-mempool,
>    multi-segment and single pool/segment.
> v4:
>  - updated if condition.
> v3:
>  - Simplified conditional check.
>  - Corrected spell, whether.
> v2:
>  - Rebased on tip of next-net/main.
> ---
>  app/test-pmd/parameters.c |  3 ++
>  app/test-pmd/testpmd.c    | 58 +++++++++++++++++++++++++--------------
>  app/test-pmd/testpmd.h    |  1 +
>  3 files changed, 41 insertions(+), 21 deletions(-)
> 
> diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
> index aed4cdcb84..26d6450db4 100644
> --- a/app/test-pmd/parameters.c
> +++ b/app/test-pmd/parameters.c
> @@ -700,6 +700,7 @@ launch_args_parse(int argc, char** argv)
>  		{ "rx-mq-mode",                 1, 0, 0 },
>  		{ "record-core-cycles",         0, 0, 0 },
>  		{ "record-burst-stats",         0, 0, 0 },
> +		{ "multi-mempool",              0, 0, 0 },

Can you please group it with the related parameters instead of appending it at
the end? After the 'rxpkts' related parameters group (so after 'txpkts') would
be a good location, since it is used for buffer split.

The new argument also needs to be documented in 'doc/guides/testpmd_app_ug/run_app.rst'.

Also need to add a help string in the 'usage()' function, again grouped with the
related parameters.
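
i.e., roughly like below (placement sketch only, the exact help text is up to you):

	/* in usage(), after the txpkts help string */
	printf("  --multi-mempool: enable multiple mempools per Rx queue\n");

	/* in launch_args_parse(), lgopts[], after the "txpkts" entry */
	{ "multi-mempool",              0, 0, 0 },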

>  		{ PARAM_NUM_PROCS,              1, 0, 0 },
>  		{ PARAM_PROC_ID,                1, 0, 0 },
>  		{ 0, 0, 0, 0 },
> @@ -1449,6 +1450,8 @@ launch_args_parse(int argc, char** argv)
>  				record_core_cycles = 1;
>  			if (!strcmp(lgopts[opt_idx].name, "record-burst-stats"))
>  				record_burst_stats = 1;
> +			if (!strcmp(lgopts[opt_idx].name, "multi-mempool"))
> +				multi_mempool = 1;

Can you group it with the related parameters, at the same location as mentioned above?

>  			if (!strcmp(lgopts[opt_idx].name, PARAM_NUM_PROCS))
>  				num_procs = atoi(optarg);
>  			if (!strcmp(lgopts[opt_idx].name, PARAM_PROC_ID))
> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> index 4e25f77c6a..9dfc4c9d0e 100644
> --- a/app/test-pmd/testpmd.c
> +++ b/app/test-pmd/testpmd.c
> @@ -497,6 +497,11 @@ uint8_t record_burst_stats;
>   */
>  uint32_t rxq_share;
>  
> +/*
> + * Multi-mempool support, disabled by default.
> + */
> +uint8_t multi_mempool;

Can you put this after 'rx_pkt_nb_segs' related group.

> +
>  unsigned int num_sockets = 0;
>  unsigned int socket_ids[RTE_MAX_NUMA_NODES];
>  
> @@ -2655,28 +2660,23 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
>  	union rte_eth_rxseg rx_useg[MAX_SEGS_BUFFER_SPLIT] = {};
>  	struct rte_mempool *rx_mempool[MAX_MEMPOOL] = {};
>  	struct rte_mempool *mpx;
> +	struct rte_eth_dev_info dev_info;
>  	unsigned int i, mp_n;
>  	uint32_t prev_hdrs = 0;
>  	int ret;
>  
> +	ret = rte_eth_dev_info_get(port_id, &dev_info);
> +	if (ret != 0)
> +		return ret;
> +
>  	/* Verify Rx queue configuration is single pool and segment or
>  	 * multiple pool/segment.
> +	 * @see rte_eth_dev_info::max_rx_mempools
>  	 * @see rte_eth_rxconf::rx_mempools
>  	 * @see rte_eth_rxconf::rx_seg
>  	 */

Is above comment block still valid?

> -	if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
> -	    ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0))) {
> -		/* Single pool/segment configuration */
> -		rx_conf->rx_seg = NULL;
> -		rx_conf->rx_nseg = 0;
> -		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id,
> -					     nb_rx_desc, socket_id,
> -					     rx_conf, mp);
> -		goto exit;
> -	}
> -
> -	if (rx_pkt_nb_segs > 1 ||
> -	    rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
> +	if ((rx_pkt_nb_segs > 1) &&
> +	    (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) {
>  		/* multi-segment configuration */
>  		for (i = 0; i < rx_pkt_nb_segs; i++) {
>  			struct rte_eth_rxseg_split *rx_seg = &rx_useg[i].split;
> @@ -2701,7 +2701,14 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
>  		}
>  		rx_conf->rx_nseg = rx_pkt_nb_segs;
>  		rx_conf->rx_seg = rx_useg;
> -	} else {
> +		rx_conf->rx_mempools = NULL;
> +		rx_conf->rx_nmempool = 0;
> +		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
> +				    socket_id, rx_conf, NULL);
> +		rx_conf->rx_seg = NULL;
> +		rx_conf->rx_nseg = 0;
> +	} else if ((multi_mempool == 1) && (dev_info.max_rx_mempools != 0) &&
> +		  (mbuf_data_size_n > 1)) {

What do you think about moving 'rte_eth_dev_info_get()' within this if block
to reduce the 'dev_info' scope, like:

else if (multi_mempool == 1)
	if (mbuf_data_size_n <= 1))
		log(describe problem)
		return
	struct rte_eth_dev_info dev_info;
	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (dev_info.max_rx_mempools == 0)
		log("device doesn't support requested config"
		return
	<multi-pool configuration>
else
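
In C that would look roughly like below (only a sketch; the exact log macro,
messages and return values are up to you):

	} else if (multi_mempool == 1) {
		/* multi-pool configuration */
		struct rte_eth_dev_info dev_info;

		if (mbuf_data_size_n <= 1) {
			/* log: multi-mempool set but only one mbuf size configured */
			return -EINVAL;
		}
		ret = rte_eth_dev_info_get(port_id, &dev_info);
		if (ret != 0)
			return ret;
		if (dev_info.max_rx_mempools == 0) {
			/* log: device doesn't support multiple Rx mempools */
			return -ENOTSUP;
		}
		/* <multi-pool configuration as before> */
	} else {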

>  		/* multi-pool configuration */
>  		for (i = 0; i < mbuf_data_size_n; i++) {
>  			mpx = mbuf_pool_find(socket_id, i);

Where are the mempools created? Does that code also need to be updated to
use/check the 'multi_mempool' variable/config?

> @@ -2709,14 +2716,23 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
>  		}
>  		rx_conf->rx_mempools = rx_mempool;
>  		rx_conf->rx_nmempool = mbuf_data_size_n;
> -	}
> -	ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
> +		rx_conf->rx_seg = NULL;
> +		rx_conf->rx_nseg = 0;
> +		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
>  				    socket_id, rx_conf, NULL);
> -	rx_conf->rx_seg = NULL;
> -	rx_conf->rx_nseg = 0;
> -	rx_conf->rx_mempools = NULL;
> -	rx_conf->rx_nmempool = 0;
> -exit:
> +		rx_conf->rx_mempools = NULL;
> +		rx_conf->rx_nmempool = 0;
> +	} else {
> +		/* Single pool/segment configuration */
> +		rx_conf->rx_seg = NULL;
> +		rx_conf->rx_nseg = 0;
> +		rx_conf->rx_mempools = NULL;
> +		rx_conf->rx_nmempool = 0;
> +		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
> +				    socket_id, rx_conf, mp);
> +	}
> +
> +
>  	ports[port_id].rxq[rx_queue_id].state = rx_conf->rx_deferred_start ?
>  						RTE_ETH_QUEUE_STATE_STOPPED :
>  						RTE_ETH_QUEUE_STATE_STARTED;
> diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
> index aaf69c349a..9472a2cb19 100644
> --- a/app/test-pmd/testpmd.h
> +++ b/app/test-pmd/testpmd.h
> @@ -464,6 +464,7 @@ enum dcb_mode_enable
>  extern uint8_t xstats_hide_zero; /**< Hide zero values for xstats display */
>  
>  /* globals used for configuration */
> +extern uint8_t multi_mempool; /**< Enables multi-mempool feature. */

Again please group this same location as done in .c file

>  extern uint8_t record_core_cycles; /**< Enables measurement of CPU cycles */
>  extern uint8_t record_burst_stats; /**< Enables display of RX and TX bursts */
>  extern uint16_t verbose_level; /**< Drives messages being displayed, if any. */


^ permalink raw reply	[flat|nested] 27+ messages in thread

* RE: [EXT] Re: [PATCH v5 1/1] app/testpmd: add valid check to verify multi mempool feature
  2022-11-21 13:22         ` Ferruh Yigit
@ 2022-11-21 13:36           ` Hanumanth Reddy Pothula
  2022-11-21 14:10             ` Ferruh Yigit
  0 siblings, 1 reply; 27+ messages in thread
From: Hanumanth Reddy Pothula @ 2022-11-21 13:36 UTC (permalink / raw)
  To: Ferruh Yigit, Aman Singh, Yuying Zhang
  Cc: dev, andrew.rybchenko, thomas, yux.jiang,
	Jerin Jacob Kollanukkaran, Nithin Kumar Dabilpuram



> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@amd.com>
> Sent: Monday, November 21, 2022 6:53 PM
> To: Hanumanth Reddy Pothula <hpothula@marvell.com>; Aman Singh
> <aman.deep.singh@intel.com>; Yuying Zhang <yuying.zhang@intel.com>
> Cc: dev@dpdk.org; andrew.rybchenko@oktetlabs.ru;
> thomas@monjalon.net; yux.jiang@intel.com; Jerin Jacob Kollanukkaran
> <jerinj@marvell.com>; Nithin Kumar Dabilpuram
> <ndabilpuram@marvell.com>
> Subject: [EXT] Re: [PATCH v5 1/1] app/testpmd: add valid check to verify
> multi mempool feature
> 
> External Email
> 
> ----------------------------------------------------------------------
> On 11/21/2022 12:45 PM, Hanumanth Pothula wrote:
> > Validate ethdev parameter 'max_rx_mempools' to know whether device
> > supports multi-mempool feature or not.
> >
> > Also, add new testpmd command line argument, multi-mempool, to
> control
> > multi-mempool feature. By default its disabled.
> >
> > Bugzilla ID: 1128
> > Fixes: 4f04edcda769 ("app/testpmd: support multiple mbuf pools per Rx
> > queue")
> >
> > Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>
> >
> > ---
> > v5:
> >  - Added testpmd argument to enable multi-mempool feature.
> >  - Simplified logic to distinguish between multi-mempool,
> >    multi-segment and single pool/segment.
> > v4:
> >  - updated if condition.
> > v3:
> >  - Simplified conditional check.
> >  - Corrected spell, whether.
> > v2:
> >  - Rebased on tip of next-net/main.
> > ---
> >  app/test-pmd/parameters.c |  3 ++
> >  app/test-pmd/testpmd.c    | 58 +++++++++++++++++++++++++------------
> --
> >  app/test-pmd/testpmd.h    |  1 +
> >  3 files changed, 41 insertions(+), 21 deletions(-)
> >
> > diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
> > index aed4cdcb84..26d6450db4 100644
> > --- a/app/test-pmd/parameters.c
> > +++ b/app/test-pmd/parameters.c
> > @@ -700,6 +700,7 @@ launch_args_parse(int argc, char** argv)
> >  		{ "rx-mq-mode",                 1, 0, 0 },
> >  		{ "record-core-cycles",         0, 0, 0 },
> >  		{ "record-burst-stats",         0, 0, 0 },
> > +		{ "multi-mempool",              0, 0, 0 },
> 
> Can you please group with relatet paramters, instead of appending end,
> after 'rxpkts' related parameters group (so after 'txpkts') can be good
> location since it is used for buffer split.
> 
Ack

> need to document new argument on
> 'doc/guides/testpmd_app_ug/run_app.rst'
>
Ack
 
> Also need to add help string in 'usage()' function, again grouped in related
> paramters.
Sure, will add help string
> 
> >  		{ PARAM_NUM_PROCS,              1, 0, 0 },
> >  		{ PARAM_PROC_ID,                1, 0, 0 },
> >  		{ 0, 0, 0, 0 },
> > @@ -1449,6 +1450,8 @@ launch_args_parse(int argc, char** argv)
> >  				record_core_cycles = 1;
> >  			if (!strcmp(lgopts[opt_idx].name, "record-burst-
> stats"))
> >  				record_burst_stats = 1;
> > +			if (!strcmp(lgopts[opt_idx].name, "multi-
> mempool"))
> > +				multi_mempool = 1;
> 
> Can you group with related paramters, same as above mentioned location?
> 
Ack
> >  			if (!strcmp(lgopts[opt_idx].name,
> PARAM_NUM_PROCS))
> >  				num_procs = atoi(optarg);
> >  			if (!strcmp(lgopts[opt_idx].name,
> PARAM_PROC_ID)) diff --git
> > a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index
> > 4e25f77c6a..9dfc4c9d0e 100644
> > --- a/app/test-pmd/testpmd.c
> > +++ b/app/test-pmd/testpmd.c
> > @@ -497,6 +497,11 @@ uint8_t record_burst_stats;
> >   */
> >  uint32_t rxq_share;
> >
> > +/*
> > + * Multi-mempool support, disabled by default.
> > + */
> > +uint8_t multi_mempool;
> 
> Can you put this after 'rx_pkt_nb_segs' related group.
> 
Ack
> > +
> >  unsigned int num_sockets = 0;
> >  unsigned int socket_ids[RTE_MAX_NUMA_NODES];
> >
> > @@ -2655,28 +2660,23 @@ rx_queue_setup(uint16_t port_id, uint16_t
> rx_queue_id,
> >  	union rte_eth_rxseg rx_useg[MAX_SEGS_BUFFER_SPLIT] = {};
> >  	struct rte_mempool *rx_mempool[MAX_MEMPOOL] = {};
> >  	struct rte_mempool *mpx;
> > +	struct rte_eth_dev_info dev_info;
> >  	unsigned int i, mp_n;
> >  	uint32_t prev_hdrs = 0;
> >  	int ret;
> >
> > +	ret = rte_eth_dev_info_get(port_id, &dev_info);
> > +	if (ret != 0)
> > +		return ret;
> > +
> >  	/* Verify Rx queue configuration is single pool and segment or
> >  	 * multiple pool/segment.
> > +	 * @see rte_eth_dev_info::max_rx_mempools
> >  	 * @see rte_eth_rxconf::rx_mempools
> >  	 * @see rte_eth_rxconf::rx_seg
> >  	 */
> 
> Is above comment block still valid?
Will remove
> 
> > -	if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
> > -	    ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) !=
> 0))) {
> > -		/* Single pool/segment configuration */
> > -		rx_conf->rx_seg = NULL;
> > -		rx_conf->rx_nseg = 0;
> > -		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id,
> > -					     nb_rx_desc, socket_id,
> > -					     rx_conf, mp);
> > -		goto exit;
> > -	}
> > -
> > -	if (rx_pkt_nb_segs > 1 ||
> > -	    rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
> > +	if ((rx_pkt_nb_segs > 1) &&
> > +	    (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) {
> >  		/* multi-segment configuration */
> >  		for (i = 0; i < rx_pkt_nb_segs; i++) {
> >  			struct rte_eth_rxseg_split *rx_seg =
> &rx_useg[i].split; @@ -2701,7
> > +2701,14 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
> >  		}
> >  		rx_conf->rx_nseg = rx_pkt_nb_segs;
> >  		rx_conf->rx_seg = rx_useg;
> > -	} else {
> > +		rx_conf->rx_mempools = NULL;
> > +		rx_conf->rx_nmempool = 0;
> > +		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id,
> nb_rx_desc,
> > +				    socket_id, rx_conf, NULL);
> > +		rx_conf->rx_seg = NULL;
> > +		rx_conf->rx_nseg = 0;
> > +	} else if ((multi_mempool == 1) && (dev_info.max_rx_mempools !=
> 0) &&
> > +		  (mbuf_data_size_n > 1)) {
> 
> What do you think to move 'rte_eth_dev_info_get()' within this if block,
> and reduce 'dev_info' scope, like
Ack
> 
> else if (multi_mempool == 1)
> 	if (mbuf_data_size_n <= 1))
> 		log(describe problem)
> 		return
> 	struct rte_eth_dev_info dev_info;
> 	ret = rte_eth_dev_info_get(port_id, &dev_info);
> 	if (dev_info.max_rx_mempools == 0)
> 		log("device doesn't support requested config"
> 		return
> 	<multi-pool configuration>
> else
> 
> >  		/* multi-pool configuration */
> >  		for (i = 0; i < mbuf_data_size_n; i++) {
> >  			mpx = mbuf_pool_find(socket_id, i);
> 
> Where the mempools are created? Is that code also needs to be updated to
> use/check 'multi_mempool' variable/config?
I think it's not required, as the user might create multiple pools for other scenarios as well. For example, as part of Bugzilla ID 1128, the user creates two pools, but not for the multi-mempool feature:
./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5,6 -n 8 --force-max-simd-bitwidth=64 -- -i --portmask=0x3 --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1 --mbuf-size=2048,2048
> 
> > @@ -2709,14 +2716,23 @@ rx_queue_setup(uint16_t port_id, uint16_t
> rx_queue_id,
> >  		}
> >  		rx_conf->rx_mempools = rx_mempool;
> >  		rx_conf->rx_nmempool = mbuf_data_size_n;
> > -	}
> > -	ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
> > +		rx_conf->rx_seg = NULL;
> > +		rx_conf->rx_nseg = 0;
> > +		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id,
> nb_rx_desc,
> >  				    socket_id, rx_conf, NULL);
> > -	rx_conf->rx_seg = NULL;
> > -	rx_conf->rx_nseg = 0;
> > -	rx_conf->rx_mempools = NULL;
> > -	rx_conf->rx_nmempool = 0;
> > -exit:
> > +		rx_conf->rx_mempools = NULL;
> > +		rx_conf->rx_nmempool = 0;
> > +	} else {
> > +		/* Single pool/segment configuration */
> > +		rx_conf->rx_seg = NULL;
> > +		rx_conf->rx_nseg = 0;
> > +		rx_conf->rx_mempools = NULL;
> > +		rx_conf->rx_nmempool = 0;
> > +		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id,
> nb_rx_desc,
> > +				    socket_id, rx_conf, mp);
> > +	}
> > +
> > +
> >  	ports[port_id].rxq[rx_queue_id].state = rx_conf->rx_deferred_start
> ?
> >
> 	RTE_ETH_QUEUE_STATE_STOPPED :
> >
> 	RTE_ETH_QUEUE_STATE_STARTED;
> > diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index
> > aaf69c349a..9472a2cb19 100644
> > --- a/app/test-pmd/testpmd.h
> > +++ b/app/test-pmd/testpmd.h
> > @@ -464,6 +464,7 @@ enum dcb_mode_enable  extern uint8_t
> > xstats_hide_zero; /**< Hide zero values for xstats display */
> >
> >  /* globals used for configuration */
> > +extern uint8_t multi_mempool; /**< Enables multi-mempool feature.
> */
> 
> Again please group this same location as done in .c file
Ack.
> 
> >  extern uint8_t record_core_cycles; /**< Enables measurement of CPU
> > cycles */  extern uint8_t record_burst_stats; /**< Enables display of
> > RX and TX bursts */  extern uint16_t verbose_level; /**< Drives messages
> being displayed, if any. */


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [EXT] Re: [PATCH v5 1/1] app/testpmd: add valid check to verify multi mempool feature
  2022-11-21 13:36           ` [EXT] " Hanumanth Reddy Pothula
@ 2022-11-21 14:10             ` Ferruh Yigit
  0 siblings, 0 replies; 27+ messages in thread
From: Ferruh Yigit @ 2022-11-21 14:10 UTC (permalink / raw)
  To: Hanumanth Reddy Pothula, Aman Singh, Yuying Zhang
  Cc: dev, andrew.rybchenko, thomas, yux.jiang,
	Jerin Jacob Kollanukkaran, Nithin Kumar Dabilpuram

On 11/21/2022 1:36 PM, Hanumanth Reddy Pothula wrote:
> 
> 
>> -----Original Message-----
>> From: Ferruh Yigit <ferruh.yigit@amd.com>
>> Sent: Monday, November 21, 2022 6:53 PM
>> To: Hanumanth Reddy Pothula <hpothula@marvell.com>; Aman Singh
>> <aman.deep.singh@intel.com>; Yuying Zhang <yuying.zhang@intel.com>
>> Cc: dev@dpdk.org; andrew.rybchenko@oktetlabs.ru;
>> thomas@monjalon.net; yux.jiang@intel.com; Jerin Jacob Kollanukkaran
>> <jerinj@marvell.com>; Nithin Kumar Dabilpuram
>> <ndabilpuram@marvell.com>
>> Subject: [EXT] Re: [PATCH v5 1/1] app/testpmd: add valid check to verify
>> multi mempool feature
>>
>> External Email
>>
>> ----------------------------------------------------------------------
>> On 11/21/2022 12:45 PM, Hanumanth Pothula wrote:
>>> Validate ethdev parameter 'max_rx_mempools' to know whether device
>>> supports multi-mempool feature or not.
>>>
>>> Also, add new testpmd command line argument, multi-mempool, to
>> control
>>> multi-mempool feature. By default its disabled.
>>>
>>> Bugzilla ID: 1128
>>> Fixes: 4f04edcda769 ("app/testpmd: support multiple mbuf pools per Rx
>>> queue")
>>>
>>> Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>
>>>
>>> ---
>>> v5:
>>>  - Added testpmd argument to enable multi-mempool feature.
>>>  - Simplified logic to distinguish between multi-mempool,
>>>    multi-segment and single pool/segment.
>>> v4:
>>>  - updated if condition.
>>> v3:
>>>  - Simplified conditional check.
>>>  - Corrected spell, whether.
>>> v2:
>>>  - Rebased on tip of next-net/main.
>>> ---
>>>  app/test-pmd/parameters.c |  3 ++
>>>  app/test-pmd/testpmd.c    | 58 +++++++++++++++++++++++++------------
>> --
>>>  app/test-pmd/testpmd.h    |  1 +
>>>  3 files changed, 41 insertions(+), 21 deletions(-)
>>>
>>> diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
>>> index aed4cdcb84..26d6450db4 100644
>>> --- a/app/test-pmd/parameters.c
>>> +++ b/app/test-pmd/parameters.c
>>> @@ -700,6 +700,7 @@ launch_args_parse(int argc, char** argv)
>>>  		{ "rx-mq-mode",                 1, 0, 0 },
>>>  		{ "record-core-cycles",         0, 0, 0 },
>>>  		{ "record-burst-stats",         0, 0, 0 },
>>> +		{ "multi-mempool",              0, 0, 0 },
>>
>> Can you please group with relatet paramters, instead of appending end,
>> after 'rxpkts' related parameters group (so after 'txpkts') can be good
>> location since it is used for buffer split.
>>
> Ack
> 
>> need to document new argument on
>> 'doc/guides/testpmd_app_ug/run_app.rst'
>>
> Ack
>  
>> Also need to add help string in 'usage()' function, again grouped in related
>> paramters.
> Sure, will add help string
>>
>>>  		{ PARAM_NUM_PROCS,              1, 0, 0 },
>>>  		{ PARAM_PROC_ID,                1, 0, 0 },
>>>  		{ 0, 0, 0, 0 },
>>> @@ -1449,6 +1450,8 @@ launch_args_parse(int argc, char** argv)
>>>  				record_core_cycles = 1;
>>>  			if (!strcmp(lgopts[opt_idx].name, "record-burst-
>> stats"))
>>>  				record_burst_stats = 1;
>>> +			if (!strcmp(lgopts[opt_idx].name, "multi-
>> mempool"))
>>> +				multi_mempool = 1;
>>
>> Can you group with related paramters, same as above mentioned location?
>>
> Ack
>>>  			if (!strcmp(lgopts[opt_idx].name,
>> PARAM_NUM_PROCS))
>>>  				num_procs = atoi(optarg);
>>>  			if (!strcmp(lgopts[opt_idx].name,
>> PARAM_PROC_ID)) diff --git
>>> a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index
>>> 4e25f77c6a..9dfc4c9d0e 100644
>>> --- a/app/test-pmd/testpmd.c
>>> +++ b/app/test-pmd/testpmd.c
>>> @@ -497,6 +497,11 @@ uint8_t record_burst_stats;
>>>   */
>>>  uint32_t rxq_share;
>>>
>>> +/*
>>> + * Multi-mempool support, disabled by default.
>>> + */
>>> +uint8_t multi_mempool;
>>
>> Can you put this after 'rx_pkt_nb_segs' related group.
>>
> Ack
>>> +
>>>  unsigned int num_sockets = 0;
>>>  unsigned int socket_ids[RTE_MAX_NUMA_NODES];
>>>
>>> @@ -2655,28 +2660,23 @@ rx_queue_setup(uint16_t port_id, uint16_t
>> rx_queue_id,
>>>  	union rte_eth_rxseg rx_useg[MAX_SEGS_BUFFER_SPLIT] = {};
>>>  	struct rte_mempool *rx_mempool[MAX_MEMPOOL] = {};
>>>  	struct rte_mempool *mpx;
>>> +	struct rte_eth_dev_info dev_info;
>>>  	unsigned int i, mp_n;
>>>  	uint32_t prev_hdrs = 0;
>>>  	int ret;
>>>
>>> +	ret = rte_eth_dev_info_get(port_id, &dev_info);
>>> +	if (ret != 0)
>>> +		return ret;
>>> +
>>>  	/* Verify Rx queue configuration is single pool and segment or
>>>  	 * multiple pool/segment.
>>> +	 * @see rte_eth_dev_info::max_rx_mempools
>>>  	 * @see rte_eth_rxconf::rx_mempools
>>>  	 * @see rte_eth_rxconf::rx_seg
>>>  	 */
>>
>> Is above comment block still valid?
> Will remove
>>
>>> -	if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
>>> -	    ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) !=
>> 0))) {
>>> -		/* Single pool/segment configuration */
>>> -		rx_conf->rx_seg = NULL;
>>> -		rx_conf->rx_nseg = 0;
>>> -		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id,
>>> -					     nb_rx_desc, socket_id,
>>> -					     rx_conf, mp);
>>> -		goto exit;
>>> -	}
>>> -
>>> -	if (rx_pkt_nb_segs > 1 ||
>>> -	    rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
>>> +	if ((rx_pkt_nb_segs > 1) &&
>>> +	    (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) {
>>>  		/* multi-segment configuration */
>>>  		for (i = 0; i < rx_pkt_nb_segs; i++) {
>>>  			struct rte_eth_rxseg_split *rx_seg =
>> &rx_useg[i].split; @@ -2701,7
>>> +2701,14 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
>>>  		}
>>>  		rx_conf->rx_nseg = rx_pkt_nb_segs;
>>>  		rx_conf->rx_seg = rx_useg;
>>> -	} else {
>>> +		rx_conf->rx_mempools = NULL;
>>> +		rx_conf->rx_nmempool = 0;
>>> +		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id,
>> nb_rx_desc,
>>> +				    socket_id, rx_conf, NULL);
>>> +		rx_conf->rx_seg = NULL;
>>> +		rx_conf->rx_nseg = 0;
>>> +	} else if ((multi_mempool == 1) && (dev_info.max_rx_mempools !=
>> 0) &&
>>> +		  (mbuf_data_size_n > 1)) {
>>
>> What do you think to move 'rte_eth_dev_info_get()' within this if block,
>> and reduce 'dev_info' scope, like
> Ack
>>
>> else if (multi_mempool == 1)
>> 	if (mbuf_data_size_n <= 1))
>> 		log(describe problem)
>> 		return
>> 	struct rte_eth_dev_info dev_info;
>> 	ret = rte_eth_dev_info_get(port_id, &dev_info);
>> 	if (dev_info.max_rx_mempools == 0)
>> 		log("device doesn't support requested config"
>> 		return
>> 	<multi-pool configuration>
>> else
>>
>>>  		/* multi-pool configuration */
>>>  		for (i = 0; i < mbuf_data_size_n; i++) {
>>>  			mpx = mbuf_pool_find(socket_id, i);
>>
>> Where the mempools are created? Is that code also needs to be updated to
>> use/check 'multi_mempool' variable/config?
> I think it's not required, as user might create  multiple pools for other scenarios as well, for example as part of buzilla id: 1128, user creating two pools but not for multi-mempool feature.
> ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5,6 -n 8 --force-max-simd-bitwidth=64 -- -i --portmask=0x3 --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1 --mbuf-size=2048,2048

If they are not created explicitly for multiple pools, I agree not to change
that code, thanks.

>>
>>> @@ -2709,14 +2716,23 @@ rx_queue_setup(uint16_t port_id, uint16_t
>> rx_queue_id,
>>>  		}
>>>  		rx_conf->rx_mempools = rx_mempool;
>>>  		rx_conf->rx_nmempool = mbuf_data_size_n;
>>> -	}
>>> -	ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
>>> +		rx_conf->rx_seg = NULL;
>>> +		rx_conf->rx_nseg = 0;
>>> +		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id,
>> nb_rx_desc,
>>>  				    socket_id, rx_conf, NULL);
>>> -	rx_conf->rx_seg = NULL;
>>> -	rx_conf->rx_nseg = 0;
>>> -	rx_conf->rx_mempools = NULL;
>>> -	rx_conf->rx_nmempool = 0;
>>> -exit:
>>> +		rx_conf->rx_mempools = NULL;
>>> +		rx_conf->rx_nmempool = 0;
>>> +	} else {
>>> +		/* Single pool/segment configuration */
>>> +		rx_conf->rx_seg = NULL;
>>> +		rx_conf->rx_nseg = 0;
>>> +		rx_conf->rx_mempools = NULL;
>>> +		rx_conf->rx_nmempool = 0;
>>> +		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id,
>> nb_rx_desc,
>>> +				    socket_id, rx_conf, mp);
>>> +	}
>>> +
>>> +
>>>  	ports[port_id].rxq[rx_queue_id].state = rx_conf->rx_deferred_start
>> ?
>>>
>> 	RTE_ETH_QUEUE_STATE_STOPPED :
>>>
>> 	RTE_ETH_QUEUE_STATE_STARTED;
>>> diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index
>>> aaf69c349a..9472a2cb19 100644
>>> --- a/app/test-pmd/testpmd.h
>>> +++ b/app/test-pmd/testpmd.h
>>> @@ -464,6 +464,7 @@ enum dcb_mode_enable  extern uint8_t
>>> xstats_hide_zero; /**< Hide zero values for xstats display */
>>>
>>>  /* globals used for configuration */
>>> +extern uint8_t multi_mempool; /**< Enables multi-mempool feature.
>> */
>>
>> Again please group this same location as done in .c file
> Ack.
>>
>>>  extern uint8_t record_core_cycles; /**< Enables measurement of CPU
>>> cycles */  extern uint8_t record_burst_stats; /**< Enables display of
>>> RX and TX bursts */  extern uint16_t verbose_level; /**< Drives messages
>> being displayed, if any. */
> 


^ permalink raw reply	[flat|nested] 27+ messages in thread

* [PATCH v6 1/1] app/testpmd: add valid check to verify multi mempool feature
  2022-11-21 12:45       ` [PATCH v5 " Hanumanth Pothula
  2022-11-21 13:22         ` Ferruh Yigit
@ 2022-11-21 14:33         ` Hanumanth Pothula
  2022-11-21 17:31           ` Ferruh Yigit
  2022-11-21 18:07           ` [PATCH v7 " Hanumanth Pothula
  1 sibling, 2 replies; 27+ messages in thread
From: Hanumanth Pothula @ 2022-11-21 14:33 UTC (permalink / raw)
  To: Aman Singh, Yuying Zhang
  Cc: dev, andrew.rybchenko, thomas, yux.jiang, jerinj, ndabilpuram, hpothula

Validate ethdev parameter 'max_rx_mempools' to know whether
device supports multi-mempool feature or not.

Also, add new testpmd command line argument, multi-mempool,
to control multi-mempool feature. By default it's disabled.

Bugzilla ID: 1128
Fixes: 4f04edcda769 ("app/testpmd: support multiple mbuf pools per Rx queue")

Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>

---
v6:
 - Updated run_app.rst file with multi-mempool argument.
 - defined and populated multi_mempool at related arguments.
 - invoking rte_eth_dev_info_get() within multi-mempool condition
v5:
 - Added testpmd argument to enable multi-mempool feature.
 - Simplified logic to distinguish between multi-mempool,
   multi-segment and single pool/segment.
v4:
 - updated if condition.
v3:
 - Simplified conditional check.
 - Corrected spell, whether.
v2:
 - Rebased on tip of next-net/main.
---
 app/test-pmd/parameters.c             |  4 ++
 app/test-pmd/testpmd.c                | 66 +++++++++++++++++----------
 app/test-pmd/testpmd.h                |  1 +
 doc/guides/testpmd_app_ug/run_app.rst |  4 ++
 4 files changed, 50 insertions(+), 25 deletions(-)

diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index aed4cdcb84..d0f7b2f11d 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -155,6 +155,7 @@ usage(char* progname)
 	printf("  --rxhdrs=eth[,ipv4]*: set RX segment protocol to split.\n");
 	printf("  --txpkts=X[,Y]*: set TX segment sizes"
 		" or total packet length.\n");
+	printf(" --multi-mempool: enable multi-mempool support\n");
 	printf("  --txonly-multi-flow: generate multiple flows in txonly mode\n");
 	printf("  --tx-ip=src,dst: IP addresses in Tx-only mode\n");
 	printf("  --tx-udp=src[,dst]: UDP ports in Tx-only mode\n");
@@ -669,6 +670,7 @@ launch_args_parse(int argc, char** argv)
 		{ "rxpkts",			1, 0, 0 },
 		{ "rxhdrs",			1, 0, 0 },
 		{ "txpkts",			1, 0, 0 },
+		{ "multi-mempool",              0, 0, 0 },
 		{ "txonly-multi-flow",		0, 0, 0 },
 		{ "rxq-share",			2, 0, 0 },
 		{ "eth-link-speed",		1, 0, 0 },
@@ -1295,6 +1297,8 @@ launch_args_parse(int argc, char** argv)
 				else
 					rte_exit(EXIT_FAILURE, "bad txpkts\n");
 			}
+			if (!strcmp(lgopts[opt_idx].name, "multi-mempool"))
+				multi_mempool = 1;
 			if (!strcmp(lgopts[opt_idx].name, "txonly-multi-flow"))
 				txonly_multi_flow = 1;
 			if (!strcmp(lgopts[opt_idx].name, "rxq-share")) {
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 4e25f77c6a..0bf2e4bd0d 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -245,6 +245,7 @@ uint32_t max_rx_pkt_len;
  */
 uint16_t rx_pkt_seg_lengths[MAX_SEGS_BUFFER_SPLIT];
 uint8_t  rx_pkt_nb_segs; /**< Number of segments to split */
+uint8_t multi_mempool; /**< Enables multi-mempool feature */
 uint16_t rx_pkt_seg_offsets[MAX_SEGS_BUFFER_SPLIT];
 uint8_t  rx_pkt_nb_offs; /**< Number of specified offsets */
 uint32_t rx_pkt_hdr_protos[MAX_SEGS_BUFFER_SPLIT];
@@ -258,6 +259,8 @@ uint16_t tx_pkt_seg_lengths[RTE_MAX_SEGS_PER_PKT] = {
 };
 uint8_t  tx_pkt_nb_segs = 1; /**< Number of segments in TXONLY packets */
 
+
+
 enum tx_pkt_split tx_pkt_split = TX_PKT_SPLIT_OFF;
 /**< Split policy for packets to TX. */
 
@@ -2659,24 +2662,9 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 	uint32_t prev_hdrs = 0;
 	int ret;
 
-	/* Verify Rx queue configuration is single pool and segment or
-	 * multiple pool/segment.
-	 * @see rte_eth_rxconf::rx_mempools
-	 * @see rte_eth_rxconf::rx_seg
-	 */
-	if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
-	    ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0))) {
-		/* Single pool/segment configuration */
-		rx_conf->rx_seg = NULL;
-		rx_conf->rx_nseg = 0;
-		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id,
-					     nb_rx_desc, socket_id,
-					     rx_conf, mp);
-		goto exit;
-	}
 
-	if (rx_pkt_nb_segs > 1 ||
-	    rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
+	if ((rx_pkt_nb_segs > 1) &&
+	    (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) {
 		/* multi-segment configuration */
 		for (i = 0; i < rx_pkt_nb_segs; i++) {
 			struct rte_eth_rxseg_split *rx_seg = &rx_useg[i].split;
@@ -2701,22 +2689,50 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 		}
 		rx_conf->rx_nseg = rx_pkt_nb_segs;
 		rx_conf->rx_seg = rx_useg;
-	} else {
+		rx_conf->rx_mempools = NULL;
+		rx_conf->rx_nmempool = 0;
+		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
+				    socket_id, rx_conf, NULL);
+		rx_conf->rx_seg = NULL;
+		rx_conf->rx_nseg = 0;
+	} else if (multi_mempool == 1) {
 		/* multi-pool configuration */
+		struct rte_eth_dev_info dev_info;
+
+		if (mbuf_data_size_n <= 1) {
+			RTE_LOG(ERR, EAL, "invalid number of mempools %u",
+				mbuf_data_size_n);
+			return -EINVAL;
+		}
+		ret = rte_eth_dev_info_get(port_id, &dev_info);
+		if (ret != 0)
+			return ret;
+		if (dev_info.max_rx_mempools == 0) {
+			RTE_LOG(ERR, EAL, "device doesn't support requested multi-mempool configuration");
+			return -ENOTSUP;
+		}
 		for (i = 0; i < mbuf_data_size_n; i++) {
 			mpx = mbuf_pool_find(socket_id, i);
 			rx_mempool[i] = mpx ? mpx : mp;
 		}
 		rx_conf->rx_mempools = rx_mempool;
 		rx_conf->rx_nmempool = mbuf_data_size_n;
-	}
-	ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
+		rx_conf->rx_seg = NULL;
+		rx_conf->rx_nseg = 0;
+		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
 				    socket_id, rx_conf, NULL);
-	rx_conf->rx_seg = NULL;
-	rx_conf->rx_nseg = 0;
-	rx_conf->rx_mempools = NULL;
-	rx_conf->rx_nmempool = 0;
-exit:
+		rx_conf->rx_mempools = NULL;
+		rx_conf->rx_nmempool = 0;
+	} else {
+		/* Single pool/segment configuration */
+		rx_conf->rx_seg = NULL;
+		rx_conf->rx_nseg = 0;
+		rx_conf->rx_mempools = NULL;
+		rx_conf->rx_nmempool = 0;
+		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
+				    socket_id, rx_conf, mp);
+	}
+
 	ports[port_id].rxq[rx_queue_id].state = rx_conf->rx_deferred_start ?
 						RTE_ETH_QUEUE_STATE_STOPPED :
 						RTE_ETH_QUEUE_STATE_STARTED;
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index aaf69c349a..e4f9b142c9 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -589,6 +589,7 @@ extern uint32_t max_rx_pkt_len;
 extern uint32_t rx_pkt_hdr_protos[MAX_SEGS_BUFFER_SPLIT];
 extern uint16_t rx_pkt_seg_lengths[MAX_SEGS_BUFFER_SPLIT];
 extern uint8_t  rx_pkt_nb_segs; /**< Number of segments to split */
+extern uint8_t multi_mempool; /**< Enables multi-mempool feature. */
 extern uint16_t rx_pkt_seg_offsets[MAX_SEGS_BUFFER_SPLIT];
 extern uint8_t  rx_pkt_nb_offs; /**< Number of specified offsets */
 
diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst
index 610e442924..329570e721 100644
--- a/doc/guides/testpmd_app_ug/run_app.rst
+++ b/doc/guides/testpmd_app_ug/run_app.rst
@@ -365,6 +365,10 @@ The command line options are:
     Set TX segment sizes or total packet length. Valid for ``tx-only``
     and ``flowgen`` forwarding modes.
 
+* ``--multi-mempool``
+
+    Enable multi-mempool, multiple mbuf pools per Rx queue, support.
+
 *   ``--txonly-multi-flow``
 
     Generate multiple flows in txonly mode.
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* Re: [PATCH v6 1/1] app/testpmd: add valid check to verify multi mempool feature
  2022-11-21 14:33         ` [PATCH v6 " Hanumanth Pothula
@ 2022-11-21 17:31           ` Ferruh Yigit
  2022-11-21 17:45             ` [EXT] " Hanumanth Reddy Pothula
  2022-11-21 18:07           ` [PATCH v7 " Hanumanth Pothula
  1 sibling, 1 reply; 27+ messages in thread
From: Ferruh Yigit @ 2022-11-21 17:31 UTC (permalink / raw)
  To: Hanumanth Pothula, Aman Singh, Yuying Zhang
  Cc: dev, andrew.rybchenko, thomas, yux.jiang, jerinj, ndabilpuram

On 11/21/2022 2:33 PM, Hanumanth Pothula wrote:
> Validate ethdev parameter 'max_rx_mempools' to know whether
> device supports multi-mempool feature or not.
> 

Validating 'max_rx_mempools' is not the main purpose of this patch; I would
move the paragraph below up.

> Also, add new testpmd command line argument, multi-mempool,
> to control multi-mempool feature. By default its disabled.
> 
> Bugzilla ID: 1128
> Fixes: 4f04edcda769 ("app/testpmd: support multiple mbuf pools per Rx queue")
> 
> Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>
> 
> ---
> v6:
>  - Updated run_app.rst file with multi-mempool argument.
>  - defined and populated multi_mempool at related arguments.
>  - invoking rte_eth_dev_info_get() withing multi-mempool condition
> v5:
>  - Added testpmd argument to enable multi-mempool feature.
>  - Simplified logic to distinguish between multi-mempool,
>    multi-segment and single pool/segment.
> v4:
>  - updated if condition.
> v3:
>  - Simplified conditional check.
>  - Corrected spell, whether.
> v2:
>  - Rebased on tip of next-net/main.
> ---
>  app/test-pmd/parameters.c             |  4 ++
>  app/test-pmd/testpmd.c                | 66 +++++++++++++++++----------
>  app/test-pmd/testpmd.h                |  1 +
>  doc/guides/testpmd_app_ug/run_app.rst |  4 ++
>  4 files changed, 50 insertions(+), 25 deletions(-)
> 
> diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
> index aed4cdcb84..d0f7b2f11d 100644
> --- a/app/test-pmd/parameters.c
> +++ b/app/test-pmd/parameters.c
> @@ -155,6 +155,7 @@ usage(char* progname)
>  	printf("  --rxhdrs=eth[,ipv4]*: set RX segment protocol to split.\n");
>  	printf("  --txpkts=X[,Y]*: set TX segment sizes"
>  		" or total packet length.\n");
> +	printf(" --multi-mempool: enable multi-mempool support\n");

Indentation is wrong, one space is missing.

Can you also update the '--mbuf-size=' definition? It currently has:
" ... extra memory pools will be created for allocating mbufs to receive
packets with buffer splitting features.",
but now it is for both "buffer splitting and multi Rx mempool features."
It may even be possible to reference the new argument.

>  	printf("  --txonly-multi-flow: generate multiple flows in txonly mode\n");
>  	printf("  --tx-ip=src,dst: IP addresses in Tx-only mode\n");
>  	printf("  --tx-udp=src[,dst]: UDP ports in Tx-only mode\n");
> @@ -669,6 +670,7 @@ launch_args_parse(int argc, char** argv)
>  		{ "rxpkts",			1, 0, 0 },
>  		{ "rxhdrs",			1, 0, 0 },
>  		{ "txpkts",			1, 0, 0 },
> +		{ "multi-mempool",              0, 0, 0 },

On second thought, I am not sure about the 'multi-mempool' name, because
'mbuf-size' already causes multiple mempools to be created, so 'multi-mempool'
can be confusing.
As the ethdev variable name is 'max_rx_mempools', what do you think about using
'multi-rx-mempools' as the argument here?

>  		{ "txonly-multi-flow",		0, 0, 0 },
>  		{ "rxq-share",			2, 0, 0 },
>  		{ "eth-link-speed",		1, 0, 0 },
> @@ -1295,6 +1297,8 @@ launch_args_parse(int argc, char** argv)
>  				else
>  					rte_exit(EXIT_FAILURE, "bad txpkts\n");
>  			}
> +			if (!strcmp(lgopts[opt_idx].name, "multi-mempool"))
> +				multi_mempool = 1;
>  			if (!strcmp(lgopts[opt_idx].name, "txonly-multi-flow"))
>  				txonly_multi_flow = 1;
>  			if (!strcmp(lgopts[opt_idx].name, "rxq-share")) {
> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> index 4e25f77c6a..0bf2e4bd0d 100644
> --- a/app/test-pmd/testpmd.c
> +++ b/app/test-pmd/testpmd.c
> @@ -245,6 +245,7 @@ uint32_t max_rx_pkt_len;
>   */
>  uint16_t rx_pkt_seg_lengths[MAX_SEGS_BUFFER_SPLIT];
>  uint8_t  rx_pkt_nb_segs; /**< Number of segments to split */
> +uint8_t multi_mempool; /**< Enables multi-mempool feature */
>  uint16_t rx_pkt_seg_offsets[MAX_SEGS_BUFFER_SPLIT];
>  uint8_t  rx_pkt_nb_offs; /**< Number of specified offsets */
>  uint32_t rx_pkt_hdr_protos[MAX_SEGS_BUFFER_SPLIT];
> @@ -258,6 +259,8 @@ uint16_t tx_pkt_seg_lengths[RTE_MAX_SEGS_PER_PKT] = {
>  };
>  uint8_t  tx_pkt_nb_segs = 1; /**< Number of segments in TXONLY packets */
>  
> +
> +

Unintended change.

>  enum tx_pkt_split tx_pkt_split = TX_PKT_SPLIT_OFF;
>  /**< Split policy for packets to TX. */
>  
> @@ -2659,24 +2662,9 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
>  	uint32_t prev_hdrs = 0;
>  	int ret;
>  
> -	/* Verify Rx queue configuration is single pool and segment or
> -	 * multiple pool/segment.
> -	 * @see rte_eth_rxconf::rx_mempools
> -	 * @see rte_eth_rxconf::rx_seg
> -	 */
> -	if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
> -	    ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0))) {
> -		/* Single pool/segment configuration */
> -		rx_conf->rx_seg = NULL;
> -		rx_conf->rx_nseg = 0;
> -		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id,
> -					     nb_rx_desc, socket_id,
> -					     rx_conf, mp);
> -		goto exit;
> -	}
>  
> -	if (rx_pkt_nb_segs > 1 ||
> -	    rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
> +	if ((rx_pkt_nb_segs > 1) &&
> +	    (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) {
>  		/* multi-segment configuration */
>  		for (i = 0; i < rx_pkt_nb_segs; i++) {
>  			struct rte_eth_rxseg_split *rx_seg = &rx_useg[i].split;
> @@ -2701,22 +2689,50 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
>  		}
>  		rx_conf->rx_nseg = rx_pkt_nb_segs;
>  		rx_conf->rx_seg = rx_useg;
> -	} else {
> +		rx_conf->rx_mempools = NULL;
> +		rx_conf->rx_nmempool = 0;
> +		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
> +				    socket_id, rx_conf, NULL);
> +		rx_conf->rx_seg = NULL;
> +		rx_conf->rx_nseg = 0;
> +	} else if (multi_mempool == 1) {
>  		/* multi-pool configuration */
> +		struct rte_eth_dev_info dev_info;
> +
> +		if (mbuf_data_size_n <= 1) {
> +			RTE_LOG(ERR, EAL, "invalid number of mempools %u",
> +				mbuf_data_size_n);
> +			return -EINVAL;
> +		}
> +		ret = rte_eth_dev_info_get(port_id, &dev_info);
> +		if (ret != 0)
> +			return ret;
> +		if (dev_info.max_rx_mempools == 0) {
> +			RTE_LOG(ERR, EAL, "device doesn't support requested multi-mempool configuration");
> +			return -ENOTSUP;
> +		}
>  		for (i = 0; i < mbuf_data_size_n; i++) {
>  			mpx = mbuf_pool_find(socket_id, i);
>  			rx_mempool[i] = mpx ? mpx : mp;
>  		}
>  		rx_conf->rx_mempools = rx_mempool;
>  		rx_conf->rx_nmempool = mbuf_data_size_n;
> -	}
> -	ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
> +		rx_conf->rx_seg = NULL;
> +		rx_conf->rx_nseg = 0;
> +		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
>  				    socket_id, rx_conf, NULL);
> -	rx_conf->rx_seg = NULL;
> -	rx_conf->rx_nseg = 0;
> -	rx_conf->rx_mempools = NULL;
> -	rx_conf->rx_nmempool = 0;
> -exit:
> +		rx_conf->rx_mempools = NULL;
> +		rx_conf->rx_nmempool = 0;
> +	} else {
> +		/* Single pool/segment configuration */
> +		rx_conf->rx_seg = NULL;
> +		rx_conf->rx_nseg = 0;
> +		rx_conf->rx_mempools = NULL;
> +		rx_conf->rx_nmempool = 0;
> +		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
> +				    socket_id, rx_conf, mp);
> +	}
> +

Technically execution can reach this point without taking any of the
branches above; in that case there should be an error here instead of
silently continuing.

I think either there should be a check here (I am not sure how best to do it), or
the single mempool case can be the default setup out of the 'else' block. What do
you think?

>  	ports[port_id].rxq[rx_queue_id].state = rx_conf->rx_deferred_start ?
>  						RTE_ETH_QUEUE_STATE_STOPPED :
>  						RTE_ETH_QUEUE_STATE_STARTED;
> diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
> index aaf69c349a..e4f9b142c9 100644
> --- a/app/test-pmd/testpmd.h
> +++ b/app/test-pmd/testpmd.h
> @@ -589,6 +589,7 @@ extern uint32_t max_rx_pkt_len;
>  extern uint32_t rx_pkt_hdr_protos[MAX_SEGS_BUFFER_SPLIT];
>  extern uint16_t rx_pkt_seg_lengths[MAX_SEGS_BUFFER_SPLIT];
>  extern uint8_t  rx_pkt_nb_segs; /**< Number of segments to split */
> +extern uint8_t multi_mempool; /**< Enables multi-mempool feature. */
>  extern uint16_t rx_pkt_seg_offsets[MAX_SEGS_BUFFER_SPLIT];
>  extern uint8_t  rx_pkt_nb_offs; /**< Number of specified offsets */
>  
> diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst
> index 610e442924..329570e721 100644
> --- a/doc/guides/testpmd_app_ug/run_app.rst
> +++ b/doc/guides/testpmd_app_ug/run_app.rst
> @@ -365,6 +365,10 @@ The command line options are:
>      Set TX segment sizes or total packet length. Valid for ``tx-only``
>      and ``flowgen`` forwarding modes.
>  
> +* ``--multi-mempool``
> +
> +    Enable multi-mempool, multiple mbuf pools per Rx queue, support.
> +
>  *   ``--txonly-multi-flow``
>  
>      Generate multiple flows in txonly mode.


^ permalink raw reply	[flat|nested] 27+ messages in thread

* RE: [EXT] Re: [PATCH v6 1/1] app/testpmd: add valid check to verify multi mempool feature
  2022-11-21 17:31           ` Ferruh Yigit
@ 2022-11-21 17:45             ` Hanumanth Reddy Pothula
  2022-11-21 18:05               ` Ferruh Yigit
  0 siblings, 1 reply; 27+ messages in thread
From: Hanumanth Reddy Pothula @ 2022-11-21 17:45 UTC (permalink / raw)
  To: Ferruh Yigit, Aman Singh, Yuying Zhang
  Cc: dev, andrew.rybchenko, thomas, yux.jiang,
	Jerin Jacob Kollanukkaran, Nithin Kumar Dabilpuram



> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@amd.com>
> Sent: Monday, November 21, 2022 11:02 PM
> To: Hanumanth Reddy Pothula <hpothula@marvell.com>; Aman Singh
> <aman.deep.singh@intel.com>; Yuying Zhang <yuying.zhang@intel.com>
> Cc: dev@dpdk.org; andrew.rybchenko@oktetlabs.ru;
> thomas@monjalon.net; yux.jiang@intel.com; Jerin Jacob Kollanukkaran
> <jerinj@marvell.com>; Nithin Kumar Dabilpuram
> <ndabilpuram@marvell.com>
> Subject: [EXT] Re: [PATCH v6 1/1] app/testpmd: add valid check to verify
> multi mempool feature
> 
> External Email
> 
> ----------------------------------------------------------------------
> On 11/21/2022 2:33 PM, Hanumanth Pothula wrote:
> > Validate ethdev parameter 'max_rx_mempools' to know whether device
> > supports multi-mempool feature or not.
> >
> 
> Validation 'max_rx_mempools' is not main purpose of this patch, I would
> move below paragraph up.
> 
> > Also, add new testpmd command line argument, multi-mempool, to
> control
> > multi-mempool feature. By default its disabled.
> >
> > Bugzilla ID: 1128
> > Fixes: 4f04edcda769 ("app/testpmd: support multiple mbuf pools per Rx
> > queue")
> >
> > Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>
> >
> > ---
> > v6:
> >  - Updated run_app.rst file with multi-mempool argument.
> >  - defined and populated multi_mempool at related arguments.
> >  - invoking rte_eth_dev_info_get() withing multi-mempool condition
> > v5:
> >  - Added testpmd argument to enable multi-mempool feature.
> >  - Simplified logic to distinguish between multi-mempool,
> >    multi-segment and single pool/segment.
> > v4:
> >  - updated if condition.
> > v3:
> >  - Simplified conditional check.
> >  - Corrected spell, whether.
> > v2:
> >  - Rebased on tip of next-net/main.
> > ---
> >  app/test-pmd/parameters.c             |  4 ++
> >  app/test-pmd/testpmd.c                | 66 +++++++++++++++++----------
> >  app/test-pmd/testpmd.h                |  1 +
> >  doc/guides/testpmd_app_ug/run_app.rst |  4 ++
> >  4 files changed, 50 insertions(+), 25 deletions(-)
> >
> > diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
> > index aed4cdcb84..d0f7b2f11d 100644
> > --- a/app/test-pmd/parameters.c
> > +++ b/app/test-pmd/parameters.c
> > @@ -155,6 +155,7 @@ usage(char* progname)
> >  	printf("  --rxhdrs=eth[,ipv4]*: set RX segment protocol to split.\n");
> >  	printf("  --txpkts=X[,Y]*: set TX segment sizes"
> >  		" or total packet length.\n");
> > +	printf(" --multi-mempool: enable multi-mempool support\n");
> 
> Indentation is wrong, one space is missing.
> 
> Can you also update the '--mbuf-size=' definition, it has:
> " ... extra memory pools will be created for allocating mbufs to receive
> packets with buffer splitting features.", Now it is for both "buffer splitting
> and multi Rx mempool features."
> Even it can be possible to reference to new argument.
Sure, will update. 
> 
> >  	printf("  --txonly-multi-flow: generate multiple flows in txonly
> mode\n");
> >  	printf("  --tx-ip=src,dst: IP addresses in Tx-only mode\n");
> >  	printf("  --tx-udp=src[,dst]: UDP ports in Tx-only mode\n"); @@
> > -669,6 +670,7 @@ launch_args_parse(int argc, char** argv)
> >  		{ "rxpkts",			1, 0, 0 },
> >  		{ "rxhdrs",			1, 0, 0 },
> >  		{ "txpkts",			1, 0, 0 },
> > +		{ "multi-mempool",              0, 0, 0 },
> 
> Thinking twice, I am not sure about the 'multi-mempool' name, because
> 'mbuf-size' already cause to create multiple mempool, 'multi-mempool'
> can be confusing.
> As ethdev variable name is 'max_rx_mempools', what do you think to use
> 'multi-rx-mempools' here as argument?

Yes, 'multi-rx-mempools' looks clean.

> 
> >  		{ "txonly-multi-flow",		0, 0, 0 },
> >  		{ "rxq-share",			2, 0, 0 },
> >  		{ "eth-link-speed",		1, 0, 0 },
> > @@ -1295,6 +1297,8 @@ launch_args_parse(int argc, char** argv)
> >  				else
> >  					rte_exit(EXIT_FAILURE, "bad
> txpkts\n");
> >  			}
> > +			if (!strcmp(lgopts[opt_idx].name, "multi-
> mempool"))
> > +				multi_mempool = 1;
> >  			if (!strcmp(lgopts[opt_idx].name, "txonly-multi-
> flow"))
> >  				txonly_multi_flow = 1;
> >  			if (!strcmp(lgopts[opt_idx].name, "rxq-share")) { diff
> --git
> > a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index
> > 4e25f77c6a..0bf2e4bd0d 100644
> > --- a/app/test-pmd/testpmd.c
> > +++ b/app/test-pmd/testpmd.c
> > @@ -245,6 +245,7 @@ uint32_t max_rx_pkt_len;
> >   */
> >  uint16_t rx_pkt_seg_lengths[MAX_SEGS_BUFFER_SPLIT];
> >  uint8_t  rx_pkt_nb_segs; /**< Number of segments to split */
> > +uint8_t multi_mempool; /**< Enables multi-mempool feature */
> >  uint16_t rx_pkt_seg_offsets[MAX_SEGS_BUFFER_SPLIT];
> >  uint8_t  rx_pkt_nb_offs; /**< Number of specified offsets */
> > uint32_t rx_pkt_hdr_protos[MAX_SEGS_BUFFER_SPLIT];
> > @@ -258,6 +259,8 @@ uint16_t
> tx_pkt_seg_lengths[RTE_MAX_SEGS_PER_PKT]
> > = {  };  uint8_t  tx_pkt_nb_segs = 1; /**< Number of segments in
> > TXONLY packets */
> >
> > +
> > +
> 
> Unintendend change.

Ack
> 
> >  enum tx_pkt_split tx_pkt_split = TX_PKT_SPLIT_OFF;  /**< Split policy
> > for packets to TX. */
> >
> > @@ -2659,24 +2662,9 @@ rx_queue_setup(uint16_t port_id, uint16_t
> rx_queue_id,
> >  	uint32_t prev_hdrs = 0;
> >  	int ret;
> >
> > -	/* Verify Rx queue configuration is single pool and segment or
> > -	 * multiple pool/segment.
> > -	 * @see rte_eth_rxconf::rx_mempools
> > -	 * @see rte_eth_rxconf::rx_seg
> > -	 */
> > -	if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
> > -	    ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) !=
> 0))) {
> > -		/* Single pool/segment configuration */
> > -		rx_conf->rx_seg = NULL;
> > -		rx_conf->rx_nseg = 0;
> > -		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id,
> > -					     nb_rx_desc, socket_id,
> > -					     rx_conf, mp);
> > -		goto exit;
> > -	}
> >
> > -	if (rx_pkt_nb_segs > 1 ||
> > -	    rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
> > +	if ((rx_pkt_nb_segs > 1) &&
> > +	    (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) {
> >  		/* multi-segment configuration */
> >  		for (i = 0; i < rx_pkt_nb_segs; i++) {
> >  			struct rte_eth_rxseg_split *rx_seg =
> &rx_useg[i].split; @@
> > -2701,22 +2689,50 @@ rx_queue_setup(uint16_t port_id, uint16_t
> rx_queue_id,
> >  		}
> >  		rx_conf->rx_nseg = rx_pkt_nb_segs;
> >  		rx_conf->rx_seg = rx_useg;
> > -	} else {
> > +		rx_conf->rx_mempools = NULL;
> > +		rx_conf->rx_nmempool = 0;
> > +		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id,
> nb_rx_desc,
> > +				    socket_id, rx_conf, NULL);
> > +		rx_conf->rx_seg = NULL;
> > +		rx_conf->rx_nseg = 0;
> > +	} else if (multi_mempool == 1) {
> >  		/* multi-pool configuration */
> > +		struct rte_eth_dev_info dev_info;
> > +
> > +		if (mbuf_data_size_n <= 1) {
> > +			RTE_LOG(ERR, EAL, "invalid number of mempools
> %u",
> > +				mbuf_data_size_n);
> > +			return -EINVAL;
> > +		}
> > +		ret = rte_eth_dev_info_get(port_id, &dev_info);
> > +		if (ret != 0)
> > +			return ret;
> > +		if (dev_info.max_rx_mempools == 0) {
> > +			RTE_LOG(ERR, EAL, "device doesn't support
> requested multi-mempool configuration");
> > +			return -ENOTSUP;
> > +		}
> >  		for (i = 0; i < mbuf_data_size_n; i++) {
> >  			mpx = mbuf_pool_find(socket_id, i);
> >  			rx_mempool[i] = mpx ? mpx : mp;
> >  		}
> >  		rx_conf->rx_mempools = rx_mempool;
> >  		rx_conf->rx_nmempool = mbuf_data_size_n;
> > -	}
> > -	ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
> > +		rx_conf->rx_seg = NULL;
> > +		rx_conf->rx_nseg = 0;
> > +		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id,
> nb_rx_desc,
> >  				    socket_id, rx_conf, NULL);
> > -	rx_conf->rx_seg = NULL;
> > -	rx_conf->rx_nseg = 0;
> > -	rx_conf->rx_mempools = NULL;
> > -	rx_conf->rx_nmempool = 0;
> > -exit:
> > +		rx_conf->rx_mempools = NULL;
> > +		rx_conf->rx_nmempool = 0;
> > +	} else {
> > +		/* Single pool/segment configuration */
> > +		rx_conf->rx_seg = NULL;
> > +		rx_conf->rx_nseg = 0;
> > +		rx_conf->rx_mempools = NULL;
> > +		rx_conf->rx_nmempool = 0;
> > +		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id,
> nb_rx_desc,
> > +				    socket_id, rx_conf, mp);
> > +	}
> > +
> 
> Technically execution can reach to this point without taking any of the
> braches above, in that case there should be an error here instead of silently
> continue.
> 
> I think either there should be a check here, not sure how to do, or single
> mempool can be the default setup out of the 'else' block. What do you
> think?
> 
Yes, the default case (the final else) is the single pool/segment configuration, so I think there is no need for an error return.

This function (rx_queue_setup()) returns whatever rte_eth_rx_queue_setup() returns.
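
Condensed, the resulting flow is (setup of the rx_conf fields elided):

	if (rx_pkt_nb_segs > 1 &&
	    (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) {
		/* multi-segment (buffer split) configuration */
		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
					     socket_id, rx_conf, NULL);
	} else if (multi_mempool == 1) {
		/* multi-pool configuration, after validating mbuf_data_size_n
		 * and dev_info.max_rx_mempools
		 */
		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
					     socket_id, rx_conf, NULL);
	} else {
		/* single pool/segment configuration, the default */
		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
					     socket_id, rx_conf, mp);
	}
	/* ret from rte_eth_rx_queue_setup() is what rx_queue_setup() returns */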
 
> >  	ports[port_id].rxq[rx_queue_id].state = rx_conf->rx_deferred_start
> ?
> >
> 	RTE_ETH_QUEUE_STATE_STOPPED :
> >
> 	RTE_ETH_QUEUE_STATE_STARTED;
> > diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index
> > aaf69c349a..e4f9b142c9 100644
> > --- a/app/test-pmd/testpmd.h
> > +++ b/app/test-pmd/testpmd.h
> > @@ -589,6 +589,7 @@ extern uint32_t max_rx_pkt_len;  extern
> uint32_t
> > rx_pkt_hdr_protos[MAX_SEGS_BUFFER_SPLIT];
> >  extern uint16_t rx_pkt_seg_lengths[MAX_SEGS_BUFFER_SPLIT];
> >  extern uint8_t  rx_pkt_nb_segs; /**< Number of segments to split */
> > +extern uint8_t multi_mempool; /**< Enables multi-mempool feature.
> */
> >  extern uint16_t rx_pkt_seg_offsets[MAX_SEGS_BUFFER_SPLIT];
> >  extern uint8_t  rx_pkt_nb_offs; /**< Number of specified offsets */
> >
> > diff --git a/doc/guides/testpmd_app_ug/run_app.rst
> > b/doc/guides/testpmd_app_ug/run_app.rst
> > index 610e442924..329570e721 100644
> > --- a/doc/guides/testpmd_app_ug/run_app.rst
> > +++ b/doc/guides/testpmd_app_ug/run_app.rst
> > @@ -365,6 +365,10 @@ The command line options are:
> >      Set TX segment sizes or total packet length. Valid for ``tx-only``
> >      and ``flowgen`` forwarding modes.
> >
> > +* ``--multi-mempool``
> > +
> > +    Enable multi-mempool, multiple mbuf pools per Rx queue, support.
> > +
> >  *   ``--txonly-multi-flow``
> >
> >      Generate multiple flows in txonly mode.


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [EXT] Re: [PATCH v6 1/1] app/testpmd: add valid check to verify multi mempool feature
  2022-11-21 17:45             ` [EXT] " Hanumanth Reddy Pothula
@ 2022-11-21 18:05               ` Ferruh Yigit
  0 siblings, 0 replies; 27+ messages in thread
From: Ferruh Yigit @ 2022-11-21 18:05 UTC (permalink / raw)
  To: Hanumanth Reddy Pothula, Aman Singh, Yuying Zhang
  Cc: dev, andrew.rybchenko, thomas, yux.jiang,
	Jerin Jacob Kollanukkaran, Nithin Kumar Dabilpuram

On 11/21/2022 5:45 PM, Hanumanth Reddy Pothula wrote:
> 
> 
>> -----Original Message-----
>> From: Ferruh Yigit <ferruh.yigit@amd.com>
>> Sent: Monday, November 21, 2022 11:02 PM
>> To: Hanumanth Reddy Pothula <hpothula@marvell.com>; Aman Singh
>> <aman.deep.singh@intel.com>; Yuying Zhang <yuying.zhang@intel.com>
>> Cc: dev@dpdk.org; andrew.rybchenko@oktetlabs.ru;
>> thomas@monjalon.net; yux.jiang@intel.com; Jerin Jacob Kollanukkaran
>> <jerinj@marvell.com>; Nithin Kumar Dabilpuram
>> <ndabilpuram@marvell.com>
>> Subject: [EXT] Re: [PATCH v6 1/1] app/testpmd: add valid check to verify
>> multi mempool feature
>>
>> External Email
>>
>> ----------------------------------------------------------------------
>> On 11/21/2022 2:33 PM, Hanumanth Pothula wrote:
>>> Validate ethdev parameter 'max_rx_mempools' to know whether device
>>> supports multi-mempool feature or not.
>>>
>>
>> Validation 'max_rx_mempools' is not main purpose of this patch, I would
>> move below paragraph up.
>>
>>> Also, add new testpmd command line argument, multi-mempool, to
>> control
>>> multi-mempool feature. By default its disabled.
>>>
>>> Bugzilla ID: 1128
>>> Fixes: 4f04edcda769 ("app/testpmd: support multiple mbuf pools per Rx
>>> queue")
>>>
>>> Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>
>>>
>>> ---
>>> v6:
>>>  - Updated run_app.rst file with multi-mempool argument.
>>>  - defined and populated multi_mempool at related arguments.
>>>  - invoking rte_eth_dev_info_get() withing multi-mempool condition
>>> v5:
>>>  - Added testpmd argument to enable multi-mempool feature.
>>>  - Simplified logic to distinguish between multi-mempool,
>>>    multi-segment and single pool/segment.
>>> v4:
>>>  - updated if condition.
>>> v3:
>>>  - Simplified conditional check.
>>>  - Corrected spell, whether.
>>> v2:
>>>  - Rebased on tip of next-net/main.
>>> ---
>>>  app/test-pmd/parameters.c             |  4 ++
>>>  app/test-pmd/testpmd.c                | 66 +++++++++++++++++----------
>>>  app/test-pmd/testpmd.h                |  1 +
>>>  doc/guides/testpmd_app_ug/run_app.rst |  4 ++
>>>  4 files changed, 50 insertions(+), 25 deletions(-)
>>>
>>> diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
>>> index aed4cdcb84..d0f7b2f11d 100644
>>> --- a/app/test-pmd/parameters.c
>>> +++ b/app/test-pmd/parameters.c
>>> @@ -155,6 +155,7 @@ usage(char* progname)
>>>  	printf("  --rxhdrs=eth[,ipv4]*: set RX segment protocol to split.\n");
>>>  	printf("  --txpkts=X[,Y]*: set TX segment sizes"
>>>  		" or total packet length.\n");
>>> +	printf(" --multi-mempool: enable multi-mempool support\n");
>>
>> Indentation is wrong, one space is missing.
>>
>> Can you also update the '--mbuf-size=' definition? It has:
>> " ... extra memory pools will be created for allocating mbufs to receive
>> packets with buffer splitting features.", but now it applies to both buffer splitting
>> and multi Rx mempool features.
>> It may even be possible to reference the new argument.
> Sure, will update. 
>>
>>>  	printf("  --txonly-multi-flow: generate multiple flows in txonly
>> mode\n");
>>>  	printf("  --tx-ip=src,dst: IP addresses in Tx-only mode\n");
>>>  	printf("  --tx-udp=src[,dst]: UDP ports in Tx-only mode\n"); @@
>>> -669,6 +670,7 @@ launch_args_parse(int argc, char** argv)
>>>  		{ "rxpkts",			1, 0, 0 },
>>>  		{ "rxhdrs",			1, 0, 0 },
>>>  		{ "txpkts",			1, 0, 0 },
>>> +		{ "multi-mempool",              0, 0, 0 },
>>
>> Thinking twice, I am not sure about the 'multi-mempool' name, because
>> 'mbuf-size' already causes multiple mempools to be created, so 'multi-mempool'
>> can be confusing.
>> As the ethdev variable name is 'max_rx_mempools', what do you think about using
>> 'multi-rx-mempools' here as the argument?
> 
> Yes, 'multi-rx-mempools' looks clean.
> 
>>
>>>  		{ "txonly-multi-flow",		0, 0, 0 },
>>>  		{ "rxq-share",			2, 0, 0 },
>>>  		{ "eth-link-speed",		1, 0, 0 },
>>> @@ -1295,6 +1297,8 @@ launch_args_parse(int argc, char** argv)
>>>  				else
>>>  					rte_exit(EXIT_FAILURE, "bad
>> txpkts\n");
>>>  			}
>>> +			if (!strcmp(lgopts[opt_idx].name, "multi-
>> mempool"))
>>> +				multi_mempool = 1;
>>>  			if (!strcmp(lgopts[opt_idx].name, "txonly-multi-
>> flow"))
>>>  				txonly_multi_flow = 1;
>>>  			if (!strcmp(lgopts[opt_idx].name, "rxq-share")) { diff
>> --git
>>> a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index
>>> 4e25f77c6a..0bf2e4bd0d 100644
>>> --- a/app/test-pmd/testpmd.c
>>> +++ b/app/test-pmd/testpmd.c
>>> @@ -245,6 +245,7 @@ uint32_t max_rx_pkt_len;
>>>   */
>>>  uint16_t rx_pkt_seg_lengths[MAX_SEGS_BUFFER_SPLIT];
>>>  uint8_t  rx_pkt_nb_segs; /**< Number of segments to split */
>>> +uint8_t multi_mempool; /**< Enables multi-mempool feature */
>>>  uint16_t rx_pkt_seg_offsets[MAX_SEGS_BUFFER_SPLIT];
>>>  uint8_t  rx_pkt_nb_offs; /**< Number of specified offsets */
>>> uint32_t rx_pkt_hdr_protos[MAX_SEGS_BUFFER_SPLIT];
>>> @@ -258,6 +259,8 @@ uint16_t
>> tx_pkt_seg_lengths[RTE_MAX_SEGS_PER_PKT]
>>> = {  };  uint8_t  tx_pkt_nb_segs = 1; /**< Number of segments in
>>> TXONLY packets */
>>>
>>> +
>>> +
>>
>> Unintended change.
> 
> Ack
>>
>>>  enum tx_pkt_split tx_pkt_split = TX_PKT_SPLIT_OFF;  /**< Split policy
>>> for packets to TX. */
>>>
>>> @@ -2659,24 +2662,9 @@ rx_queue_setup(uint16_t port_id, uint16_t
>> rx_queue_id,
>>>  	uint32_t prev_hdrs = 0;
>>>  	int ret;
>>>
>>> -	/* Verify Rx queue configuration is single pool and segment or
>>> -	 * multiple pool/segment.
>>> -	 * @see rte_eth_rxconf::rx_mempools
>>> -	 * @see rte_eth_rxconf::rx_seg
>>> -	 */
>>> -	if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
>>> -	    ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) !=
>> 0))) {
>>> -		/* Single pool/segment configuration */
>>> -		rx_conf->rx_seg = NULL;
>>> -		rx_conf->rx_nseg = 0;
>>> -		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id,
>>> -					     nb_rx_desc, socket_id,
>>> -					     rx_conf, mp);
>>> -		goto exit;
>>> -	}
>>>
>>> -	if (rx_pkt_nb_segs > 1 ||
>>> -	    rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
>>> +	if ((rx_pkt_nb_segs > 1) &&
>>> +	    (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) {
>>>  		/* multi-segment configuration */
>>>  		for (i = 0; i < rx_pkt_nb_segs; i++) {
>>>  			struct rte_eth_rxseg_split *rx_seg =
>> &rx_useg[i].split; @@
>>> -2701,22 +2689,50 @@ rx_queue_setup(uint16_t port_id, uint16_t
>> rx_queue_id,
>>>  		}
>>>  		rx_conf->rx_nseg = rx_pkt_nb_segs;
>>>  		rx_conf->rx_seg = rx_useg;
>>> -	} else {
>>> +		rx_conf->rx_mempools = NULL;
>>> +		rx_conf->rx_nmempool = 0;
>>> +		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id,
>> nb_rx_desc,
>>> +				    socket_id, rx_conf, NULL);
>>> +		rx_conf->rx_seg = NULL;
>>> +		rx_conf->rx_nseg = 0;
>>> +	} else if (multi_mempool == 1) {
>>>  		/* multi-pool configuration */
>>> +		struct rte_eth_dev_info dev_info;
>>> +
>>> +		if (mbuf_data_size_n <= 1) {
>>> +			RTE_LOG(ERR, EAL, "invalid number of mempools
>> %u",
>>> +				mbuf_data_size_n);
>>> +			return -EINVAL;
>>> +		}
>>> +		ret = rte_eth_dev_info_get(port_id, &dev_info);
>>> +		if (ret != 0)
>>> +			return ret;
>>> +		if (dev_info.max_rx_mempools == 0) {
>>> +			RTE_LOG(ERR, EAL, "device doesn't support
>> requested multi-mempool configuration");
>>> +			return -ENOTSUP;
>>> +		}
>>>  		for (i = 0; i < mbuf_data_size_n; i++) {
>>>  			mpx = mbuf_pool_find(socket_id, i);
>>>  			rx_mempool[i] = mpx ? mpx : mp;
>>>  		}
>>>  		rx_conf->rx_mempools = rx_mempool;
>>>  		rx_conf->rx_nmempool = mbuf_data_size_n;
>>> -	}
>>> -	ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
>>> +		rx_conf->rx_seg = NULL;
>>> +		rx_conf->rx_nseg = 0;
>>> +		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id,
>> nb_rx_desc,
>>>  				    socket_id, rx_conf, NULL);
>>> -	rx_conf->rx_seg = NULL;
>>> -	rx_conf->rx_nseg = 0;
>>> -	rx_conf->rx_mempools = NULL;
>>> -	rx_conf->rx_nmempool = 0;
>>> -exit:
>>> +		rx_conf->rx_mempools = NULL;
>>> +		rx_conf->rx_nmempool = 0;
>>> +	} else {
>>> +		/* Single pool/segment configuration */
>>> +		rx_conf->rx_seg = NULL;
>>> +		rx_conf->rx_nseg = 0;
>>> +		rx_conf->rx_mempools = NULL;
>>> +		rx_conf->rx_nmempool = 0;
>>> +		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id,
>> nb_rx_desc,
>>> +				    socket_id, rx_conf, mp);
>>> +	}
>>> +
>>
>> Technically execution can reach this point without taking any of the
>> branches above; in that case there should be an error here instead of silently
>> continuing.
>>
>> I think either there should be a check here (though I am not sure how), or single
>> mempool can be the default setup out of the 'else' block. What do you
>> think?
>>
> Yes, the default case (final else) is going to be single pool/segment. I think there is no need for an error return.
> 
> This function (rx_queue_setup()) returns the return value of rte_eth_rx_queue_setup().
>  

ack
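
To make the agreed fall-through concrete, here is a minimal standalone sketch of the
decision order (illustrative only; the real rx_queue_setup() reads testpmd globals,
uses RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT as the offload bit, and each branch ends in
rte_eth_rx_queue_setup(), whose return value is returned):

#include <stdint.h>

enum rxq_cfg { RXQ_MULTI_SEG, RXQ_MULTI_MEMPOOL, RXQ_SINGLE };

static enum rxq_cfg
pick_rxq_cfg(uint8_t rx_pkt_nb_segs, uint64_t offloads,
	     uint64_t buffer_split_offload, uint8_t multi_mempool)
{
	/* buffer split: multiple segments requested and the offload enabled */
	if (rx_pkt_nb_segs > 1 && (offloads & buffer_split_offload) != 0)
		return RXQ_MULTI_SEG;
	/* multi Rx mempool: only when explicitly enabled on the command line */
	if (multi_mempool == 1)
		return RXQ_MULTI_MEMPOOL;
	/* default: plain single pool/segment setup, hence no error return */
	return RXQ_SINGLE;
}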

>>>  	ports[port_id].rxq[rx_queue_id].state = rx_conf->rx_deferred_start
>> ?
>>>
>> 	RTE_ETH_QUEUE_STATE_STOPPED :
>>>
>> 	RTE_ETH_QUEUE_STATE_STARTED;
>>> diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index
>>> aaf69c349a..e4f9b142c9 100644
>>> --- a/app/test-pmd/testpmd.h
>>> +++ b/app/test-pmd/testpmd.h
>>> @@ -589,6 +589,7 @@ extern uint32_t max_rx_pkt_len;  extern
>> uint32_t
>>> rx_pkt_hdr_protos[MAX_SEGS_BUFFER_SPLIT];
>>>  extern uint16_t rx_pkt_seg_lengths[MAX_SEGS_BUFFER_SPLIT];
>>>  extern uint8_t  rx_pkt_nb_segs; /**< Number of segments to split */
>>> +extern uint8_t multi_mempool; /**< Enables multi-mempool feature.
>> */
>>>  extern uint16_t rx_pkt_seg_offsets[MAX_SEGS_BUFFER_SPLIT];
>>>  extern uint8_t  rx_pkt_nb_offs; /**< Number of specified offsets */
>>>
>>> diff --git a/doc/guides/testpmd_app_ug/run_app.rst
>>> b/doc/guides/testpmd_app_ug/run_app.rst
>>> index 610e442924..329570e721 100644
>>> --- a/doc/guides/testpmd_app_ug/run_app.rst
>>> +++ b/doc/guides/testpmd_app_ug/run_app.rst
>>> @@ -365,6 +365,10 @@ The command line options are:
>>>      Set TX segment sizes or total packet length. Valid for ``tx-only``
>>>      and ``flowgen`` forwarding modes.
>>>
>>> +* ``--multi-mempool``
>>> +
>>> +    Enable multi-mempool, multiple mbuf pools per Rx queue, support.
>>> +
>>>  *   ``--txonly-multi-flow``
>>>
>>>      Generate multiple flows in txonly mode.
> 


^ permalink raw reply	[flat|nested] 27+ messages in thread

* [PATCH v7 1/1] app/testpmd: add valid check to verify multi mempool feature
  2022-11-21 14:33         ` [PATCH v6 " Hanumanth Pothula
  2022-11-21 17:31           ` Ferruh Yigit
@ 2022-11-21 18:07           ` Hanumanth Pothula
  2022-11-21 18:40             ` Ferruh Yigit
  2022-11-22  6:42             ` Han, YingyaX
  1 sibling, 2 replies; 27+ messages in thread
From: Hanumanth Pothula @ 2022-11-21 18:07 UTC (permalink / raw)
  To: Aman Singh, Yuying Zhang
  Cc: dev, andrew.rybchenko, thomas, yux.jiang, jerinj, ndabilpuram, hpothula

Validate ethdev parameter 'max_rx_mempools' to know whether
device supports multi-mempool feature or not.

Also, add new testpmd command line argument, multi-mempool,
to control multi-mempool feature. By default it's disabled.

Bugzilla ID: 1128
Fixes: 4f04edcda769 ("app/testpmd: support multiple mbuf pools per Rx queue")

Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>

---
v7:
 - Update testpmd argument name from multi-mempool to multi-rx-mempool.
 - Updated definition of testpmd argument, mbuf-size.
 - Resolved indentations.
v6:
 - Updated run_app.rst file with multi-mempool argument.
 - defined and populated multi_mempool at related arguments.
 - invoking rte_eth_dev_info_get() within multi-mempool condition
v5:
 - Added testpmd argument to enable multi-mempool feature.
 - Simplified logic to distinguish between multi-mempool,
   multi-segment and single pool/segment.
v4:
 - updated if condition.
v3:
 - Simplified conditional check.
 - Corrected spell, whether.
v2:
 - Rebased on tip of next-net/main.
---
 app/test-pmd/parameters.c             |  7 ++-
 app/test-pmd/testpmd.c                | 64 ++++++++++++++++-----------
 app/test-pmd/testpmd.h                |  1 +
 doc/guides/testpmd_app_ug/run_app.rst |  4 ++
 4 files changed, 50 insertions(+), 26 deletions(-)

diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index aed4cdcb84..af9ec39cf9 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -88,7 +88,8 @@ usage(char* progname)
 	       "in NUMA mode.\n");
 	printf("  --mbuf-size=N,[N1[,..Nn]: set the data size of mbuf to "
 	       "N bytes. If multiple numbers are specified the extra pools "
-	       "will be created to receive with packet split features\n");
+	       "will be created to receive packets based on the features "
+	       "supported, like buufer-split, multi-mempool.\n");
 	printf("  --total-num-mbufs=N: set the number of mbufs to be allocated "
 	       "in mbuf pools.\n");
 	printf("  --max-pkt-len=N: set the maximum size of packet to N bytes.\n");
@@ -155,6 +156,7 @@ usage(char* progname)
 	printf("  --rxhdrs=eth[,ipv4]*: set RX segment protocol to split.\n");
 	printf("  --txpkts=X[,Y]*: set TX segment sizes"
 		" or total packet length.\n");
+	printf("  --multi-rx-mempool: enable multi-mempool support\n");
 	printf("  --txonly-multi-flow: generate multiple flows in txonly mode\n");
 	printf("  --tx-ip=src,dst: IP addresses in Tx-only mode\n");
 	printf("  --tx-udp=src[,dst]: UDP ports in Tx-only mode\n");
@@ -669,6 +671,7 @@ launch_args_parse(int argc, char** argv)
 		{ "rxpkts",			1, 0, 0 },
 		{ "rxhdrs",			1, 0, 0 },
 		{ "txpkts",			1, 0, 0 },
+		{ "multi-rx-mempool",           0, 0, 0 },
 		{ "txonly-multi-flow",		0, 0, 0 },
 		{ "rxq-share",			2, 0, 0 },
 		{ "eth-link-speed",		1, 0, 0 },
@@ -1295,6 +1298,8 @@ launch_args_parse(int argc, char** argv)
 				else
 					rte_exit(EXIT_FAILURE, "bad txpkts\n");
 			}
+			if (!strcmp(lgopts[opt_idx].name, "multi-rx-mempool"))
+				multi_rx_mempool = 1;
 			if (!strcmp(lgopts[opt_idx].name, "txonly-multi-flow"))
 				txonly_multi_flow = 1;
 			if (!strcmp(lgopts[opt_idx].name, "rxq-share")) {
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 4e25f77c6a..716937925e 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -245,6 +245,7 @@ uint32_t max_rx_pkt_len;
  */
 uint16_t rx_pkt_seg_lengths[MAX_SEGS_BUFFER_SPLIT];
 uint8_t  rx_pkt_nb_segs; /**< Number of segments to split */
+uint8_t multi_rx_mempool; /**< Enables multi-mempool feature */
 uint16_t rx_pkt_seg_offsets[MAX_SEGS_BUFFER_SPLIT];
 uint8_t  rx_pkt_nb_offs; /**< Number of specified offsets */
 uint32_t rx_pkt_hdr_protos[MAX_SEGS_BUFFER_SPLIT];
@@ -2659,24 +2660,9 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 	uint32_t prev_hdrs = 0;
 	int ret;
 
-	/* Verify Rx queue configuration is single pool and segment or
-	 * multiple pool/segment.
-	 * @see rte_eth_rxconf::rx_mempools
-	 * @see rte_eth_rxconf::rx_seg
-	 */
-	if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
-	    ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0))) {
-		/* Single pool/segment configuration */
-		rx_conf->rx_seg = NULL;
-		rx_conf->rx_nseg = 0;
-		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id,
-					     nb_rx_desc, socket_id,
-					     rx_conf, mp);
-		goto exit;
-	}
 
-	if (rx_pkt_nb_segs > 1 ||
-	    rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
+	if ((rx_pkt_nb_segs > 1) &&
+	    (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) {
 		/* multi-segment configuration */
 		for (i = 0; i < rx_pkt_nb_segs; i++) {
 			struct rte_eth_rxseg_split *rx_seg = &rx_useg[i].split;
@@ -2701,22 +2687,50 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 		}
 		rx_conf->rx_nseg = rx_pkt_nb_segs;
 		rx_conf->rx_seg = rx_useg;
-	} else {
+		rx_conf->rx_mempools = NULL;
+		rx_conf->rx_nmempool = 0;
+		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
+				    socket_id, rx_conf, NULL);
+		rx_conf->rx_seg = NULL;
+		rx_conf->rx_nseg = 0;
+	} else if (multi_rx_mempool == 1) {
 		/* multi-pool configuration */
+		struct rte_eth_dev_info dev_info;
+
+		if (mbuf_data_size_n <= 1) {
+			RTE_LOG(ERR, EAL, "invalid number of mempools %u",
+				mbuf_data_size_n);
+			return -EINVAL;
+		}
+		ret = rte_eth_dev_info_get(port_id, &dev_info);
+		if (ret != 0)
+			return ret;
+		if (dev_info.max_rx_mempools == 0) {
+			RTE_LOG(ERR, EAL, "device doesn't support requested multi-mempool configuration");
+			return -ENOTSUP;
+		}
 		for (i = 0; i < mbuf_data_size_n; i++) {
 			mpx = mbuf_pool_find(socket_id, i);
 			rx_mempool[i] = mpx ? mpx : mp;
 		}
 		rx_conf->rx_mempools = rx_mempool;
 		rx_conf->rx_nmempool = mbuf_data_size_n;
-	}
-	ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
+		rx_conf->rx_seg = NULL;
+		rx_conf->rx_nseg = 0;
+		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
 				    socket_id, rx_conf, NULL);
-	rx_conf->rx_seg = NULL;
-	rx_conf->rx_nseg = 0;
-	rx_conf->rx_mempools = NULL;
-	rx_conf->rx_nmempool = 0;
-exit:
+		rx_conf->rx_mempools = NULL;
+		rx_conf->rx_nmempool = 0;
+	} else {
+		/* Single pool/segment configuration */
+		rx_conf->rx_seg = NULL;
+		rx_conf->rx_nseg = 0;
+		rx_conf->rx_mempools = NULL;
+		rx_conf->rx_nmempool = 0;
+		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
+				    socket_id, rx_conf, mp);
+	}
+
 	ports[port_id].rxq[rx_queue_id].state = rx_conf->rx_deferred_start ?
 						RTE_ETH_QUEUE_STATE_STOPPED :
 						RTE_ETH_QUEUE_STATE_STARTED;
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index aaf69c349a..0596d38cd2 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -589,6 +589,7 @@ extern uint32_t max_rx_pkt_len;
 extern uint32_t rx_pkt_hdr_protos[MAX_SEGS_BUFFER_SPLIT];
 extern uint16_t rx_pkt_seg_lengths[MAX_SEGS_BUFFER_SPLIT];
 extern uint8_t  rx_pkt_nb_segs; /**< Number of segments to split */
+extern uint8_t multi_rx_mempool; /**< Enables multi-mempool feature. */
 extern uint16_t rx_pkt_seg_offsets[MAX_SEGS_BUFFER_SPLIT];
 extern uint8_t  rx_pkt_nb_offs; /**< Number of specified offsets */
 
diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst
index 610e442924..af84b2260a 100644
--- a/doc/guides/testpmd_app_ug/run_app.rst
+++ b/doc/guides/testpmd_app_ug/run_app.rst
@@ -365,6 +365,10 @@ The command line options are:
     Set TX segment sizes or total packet length. Valid for ``tx-only``
     and ``flowgen`` forwarding modes.
 
+* ``--multi-rx-mempool``
+
+    Enable multi-mempool, multiple mbuf pools per Rx queue, support.
+
 *   ``--txonly-multi-flow``
 
     Generate multiple flows in txonly mode.
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* Re: [PATCH v7 1/1] app/testpmd: add valid check to verify multi mempool feature
  2022-11-21 18:07           ` [PATCH v7 " Hanumanth Pothula
@ 2022-11-21 18:40             ` Ferruh Yigit
  2022-11-22  6:42             ` Han, YingyaX
  1 sibling, 0 replies; 27+ messages in thread
From: Ferruh Yigit @ 2022-11-21 18:40 UTC (permalink / raw)
  To: Hanumanth Pothula, Aman Singh, Yuying Zhang
  Cc: dev, andrew.rybchenko, thomas, yux.jiang, jerinj, ndabilpuram

On 11/21/2022 6:07 PM, Hanumanth Pothula wrote:
> Validate ethdev parameter 'max_rx_mempools' to know whether
> device supports multi-mempool feature or not.
> 
> Also, add new testpmd command line argument, multi-mempool,
> to control multi-mempool feature. By default it's disabled.

s/multi-mempool/multi-rx-mempool/

Also moving the argument paragraph up.

> 
> Bugzilla ID: 1128
> Fixes: 4f04edcda769 ("app/testpmd: support multiple mbuf pools per Rx queue")
> 
> Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>

Reviewed-by: Ferruh Yigit <ferruh.yigit@amd.com>

With noted issues fixed,
Applied to dpdk-next-net/main, thanks.


Let's wait for the test report before requesting this to be merged to the main repo.

@Yu, @Yuying,

Can you please verify this at the latest head of the next-net tree?
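
For reference, a hypothetical invocation exercising the new path would be something like
  dpdk-testpmd -a <pci-addr> -- --mbuf-size=2048,4096 --multi-rx-mempool
(PCI address and mbuf sizes are placeholders); without --multi-rx-mempool the queue is
set up with a single pool, or with buffer split if that is configured.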

Thanks,
ferruh

> 
> ---
> v7:
>  - Update testpmd argument name from multi-mempool to multi-rx-mempool.
>  - Updated definition of testpmd argument, mbuf-size.
>  - Resolved indentations.
> v6:
>  - Updated run_app.rst file with multi-mempool argument.
>  - defined and populated multi_mempool at related arguments.
>  - invoking rte_eth_dev_info_get() within multi-mempool condition
> v5:
>  - Added testpmd argument to enable multi-mempool feature.
>  - Simplified logic to distinguish between multi-mempool,
>    multi-segment and single pool/segment.
> v4:
>  - updated if condition.
> v3:
>  - Simplified conditional check.
>  - Corrected spell, whether.
> v2:
>  - Rebased on tip of next-net/main.
> ---
>  app/test-pmd/parameters.c             |  7 ++-
>  app/test-pmd/testpmd.c                | 64 ++++++++++++++++-----------
>  app/test-pmd/testpmd.h                |  1 +
>  doc/guides/testpmd_app_ug/run_app.rst |  4 ++
>  4 files changed, 50 insertions(+), 26 deletions(-)
> 
> diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
> index aed4cdcb84..af9ec39cf9 100644
> --- a/app/test-pmd/parameters.c
> +++ b/app/test-pmd/parameters.c
> @@ -88,7 +88,8 @@ usage(char* progname)
>  	       "in NUMA mode.\n");
>  	printf("  --mbuf-size=N,[N1[,..Nn]: set the data size of mbuf to "
>  	       "N bytes. If multiple numbers are specified the extra pools "
> -	       "will be created to receive with packet split features\n");
> +	       "will be created to receive packets based on the features "
> +	       "supported, like buufer-split, multi-mempool.\n");

s/buufer/buffer/

>  	printf("  --total-num-mbufs=N: set the number of mbufs to be allocated "
>  	       "in mbuf pools.\n");
>  	printf("  --max-pkt-len=N: set the maximum size of packet to N bytes.\n");
> @@ -155,6 +156,7 @@ usage(char* progname)
>  	printf("  --rxhdrs=eth[,ipv4]*: set RX segment protocol to split.\n");
>  	printf("  --txpkts=X[,Y]*: set TX segment sizes"
>  		" or total packet length.\n");
> +	printf("  --multi-rx-mempool: enable multi-mempool support\n");
>  	printf("  --txonly-multi-flow: generate multiple flows in txonly mode\n");
>  	printf("  --tx-ip=src,dst: IP addresses in Tx-only mode\n");
>  	printf("  --tx-udp=src[,dst]: UDP ports in Tx-only mode\n");
> @@ -669,6 +671,7 @@ launch_args_parse(int argc, char** argv)
>  		{ "rxpkts",			1, 0, 0 },
>  		{ "rxhdrs",			1, 0, 0 },
>  		{ "txpkts",			1, 0, 0 },
> +		{ "multi-rx-mempool",           0, 0, 0 },
>  		{ "txonly-multi-flow",		0, 0, 0 },
>  		{ "rxq-share",			2, 0, 0 },
>  		{ "eth-link-speed",		1, 0, 0 },
> @@ -1295,6 +1298,8 @@ launch_args_parse(int argc, char** argv)
>  				else
>  					rte_exit(EXIT_FAILURE, "bad txpkts\n");
>  			}
> +			if (!strcmp(lgopts[opt_idx].name, "multi-rx-mempool"))
> +				multi_rx_mempool = 1;
>  			if (!strcmp(lgopts[opt_idx].name, "txonly-multi-flow"))
>  				txonly_multi_flow = 1;
>  			if (!strcmp(lgopts[opt_idx].name, "rxq-share")) {
> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> index 4e25f77c6a..716937925e 100644
> --- a/app/test-pmd/testpmd.c
> +++ b/app/test-pmd/testpmd.c
> @@ -245,6 +245,7 @@ uint32_t max_rx_pkt_len;
>   */
>  uint16_t rx_pkt_seg_lengths[MAX_SEGS_BUFFER_SPLIT];
>  uint8_t  rx_pkt_nb_segs; /**< Number of segments to split */
> +uint8_t multi_rx_mempool; /**< Enables multi-mempool feature */
>  uint16_t rx_pkt_seg_offsets[MAX_SEGS_BUFFER_SPLIT];
>  uint8_t  rx_pkt_nb_offs; /**< Number of specified offsets */
>  uint32_t rx_pkt_hdr_protos[MAX_SEGS_BUFFER_SPLIT];

Better to move the new variable out of the packet split related variables, and
place it below them.
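
That is, presumably ending up something like this (placement sketch only):

	uint16_t rx_pkt_seg_offsets[MAX_SEGS_BUFFER_SPLIT];
	uint8_t  rx_pkt_nb_offs; /**< Number of specified offsets */
	uint32_t rx_pkt_hdr_protos[MAX_SEGS_BUFFER_SPLIT];

	uint8_t multi_rx_mempool; /**< Enables multi-mempool feature */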

> @@ -2659,24 +2660,9 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
>  	uint32_t prev_hdrs = 0;
>  	int ret;
>  
> -	/* Verify Rx queue configuration is single pool and segment or
> -	 * multiple pool/segment.
> -	 * @see rte_eth_rxconf::rx_mempools
> -	 * @see rte_eth_rxconf::rx_seg
> -	 */
> -	if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
> -	    ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0))) {
> -		/* Single pool/segment configuration */
> -		rx_conf->rx_seg = NULL;
> -		rx_conf->rx_nseg = 0;
> -		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id,
> -					     nb_rx_desc, socket_id,
> -					     rx_conf, mp);
> -		goto exit;
> -	}
>  
> -	if (rx_pkt_nb_segs > 1 ||
> -	    rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
> +	if ((rx_pkt_nb_segs > 1) &&
> +	    (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) {
>  		/* multi-segment configuration */
>  		for (i = 0; i < rx_pkt_nb_segs; i++) {
>  			struct rte_eth_rxseg_split *rx_seg = &rx_useg[i].split;
> @@ -2701,22 +2687,50 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
>  		}
>  		rx_conf->rx_nseg = rx_pkt_nb_segs;
>  		rx_conf->rx_seg = rx_useg;
> -	} else {
> +		rx_conf->rx_mempools = NULL;
> +		rx_conf->rx_nmempool = 0;
> +		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
> +				    socket_id, rx_conf, NULL);
> +		rx_conf->rx_seg = NULL;
> +		rx_conf->rx_nseg = 0;
> +	} else if (multi_rx_mempool == 1) {
>  		/* multi-pool configuration */
> +		struct rte_eth_dev_info dev_info;
> +
> +		if (mbuf_data_size_n <= 1) {
> +			RTE_LOG(ERR, EAL, "invalid number of mempools %u",

This is for EAL logs, not for testpmd; converting them to
"fprintf(stderr, ..." as done in the rest of the file.

> +				mbuf_data_size_n);
> +			return -EINVAL;
> +		}
> +		ret = rte_eth_dev_info_get(port_id, &dev_info);
> +		if (ret != 0)
> +			return ret;
> +		if (dev_info.max_rx_mempools == 0) {
> +			RTE_LOG(ERR, EAL, "device doesn't support requested multi-mempool configuration");
> +			return -ENOTSUP;
> +		}
>  		for (i = 0; i < mbuf_data_size_n; i++) {
>  			mpx = mbuf_pool_find(socket_id, i);
>  			rx_mempool[i] = mpx ? mpx : mp;
>  		}
>  		rx_conf->rx_mempools = rx_mempool;
>  		rx_conf->rx_nmempool = mbuf_data_size_n;
> -	}
> -	ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
> +		rx_conf->rx_seg = NULL;
> +		rx_conf->rx_nseg = 0;
> +		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
>  				    socket_id, rx_conf, NULL);
> -	rx_conf->rx_seg = NULL;
> -	rx_conf->rx_nseg = 0;
> -	rx_conf->rx_mempools = NULL;
> -	rx_conf->rx_nmempool = 0;
> -exit:
> +		rx_conf->rx_mempools = NULL;
> +		rx_conf->rx_nmempool = 0;
> +	} else {
> +		/* Single pool/segment configuration */
> +		rx_conf->rx_seg = NULL;
> +		rx_conf->rx_nseg = 0;
> +		rx_conf->rx_mempools = NULL;
> +		rx_conf->rx_nmempool = 0;
> +		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
> +				    socket_id, rx_conf, mp);
> +	}
> +
>  	ports[port_id].rxq[rx_queue_id].state = rx_conf->rx_deferred_start ?
>  						RTE_ETH_QUEUE_STATE_STOPPED :
>  						RTE_ETH_QUEUE_STATE_STARTED;
> diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
> index aaf69c349a..0596d38cd2 100644
> --- a/app/test-pmd/testpmd.h
> +++ b/app/test-pmd/testpmd.h
> @@ -589,6 +589,7 @@ extern uint32_t max_rx_pkt_len;
>  extern uint32_t rx_pkt_hdr_protos[MAX_SEGS_BUFFER_SPLIT];
>  extern uint16_t rx_pkt_seg_lengths[MAX_SEGS_BUFFER_SPLIT];
>  extern uint8_t  rx_pkt_nb_segs; /**< Number of segments to split */
> +extern uint8_t multi_rx_mempool; /**< Enables multi-mempool feature. */
>  extern uint16_t rx_pkt_seg_offsets[MAX_SEGS_BUFFER_SPLIT];
>  extern uint8_t  rx_pkt_nb_offs; /**< Number of specified offsets */
>  
> diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst
> index 610e442924..af84b2260a 100644
> --- a/doc/guides/testpmd_app_ug/run_app.rst
> +++ b/doc/guides/testpmd_app_ug/run_app.rst
> @@ -365,6 +365,10 @@ The command line options are:
>      Set TX segment sizes or total packet length. Valid for ``tx-only``
>      and ``flowgen`` forwarding modes.
>  
> +* ``--multi-rx-mempool``
> +
> +    Enable multi-mempool, multiple mbuf pools per Rx queue, support.
> +
>  *   ``--txonly-multi-flow``
>  
>      Generate multiple flows in txonly mode.


^ permalink raw reply	[flat|nested] 27+ messages in thread

* RE: [PATCH v7 1/1] app/testpmd: add valid check to verify multi mempool feature
  2022-11-21 18:07           ` [PATCH v7 " Hanumanth Pothula
  2022-11-21 18:40             ` Ferruh Yigit
@ 2022-11-22  6:42             ` Han, YingyaX
  2022-11-22  6:52               ` Tang, Yaqi
  1 sibling, 1 reply; 27+ messages in thread
From: Han, YingyaX @ 2022-11-22  6:42 UTC (permalink / raw)
  To: Hanumanth Pothula, Singh, Aman Deep, Zhang, Yuying
  Cc: dev, andrew.rybchenko, thomas, Jiang, YuX, jerinj, ndabilpuram


> -----Original Message-----
> From: Hanumanth Pothula <hpothula@marvell.com>
> Sent: Tuesday, November 22, 2022 2:08 AM
> To: Singh, Aman Deep <aman.deep.singh@intel.com>; Zhang, Yuying
> <yuying.zhang@intel.com>
> Cc: dev@dpdk.org; andrew.rybchenko@oktetlabs.ru;
> thomas@monjalon.net; Jiang, YuX <yux.jiang@intel.com>;
> jerinj@marvell.com; ndabilpuram@marvell.com; hpothula@marvell.com
> Subject: [PATCH v7 1/1] app/testpmd: add valid check to verify multi
> mempool feature
> 
> Validate ethdev parameter 'max_rx_mempools' to know whether device
> supports multi-mempool feature or not.
> 
> Also, add new testpmd command line argument, multi-mempool, to control
> multi-mempool feature. By default it's disabled.
> 
> Bugzilla ID: 1128
> Fixes: 4f04edcda769 ("app/testpmd: support multiple mbuf pools per Rx
> queue")
> 
> Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>

Tested-by: Yingya Han <yingyax.han@intel.com>

 


^ permalink raw reply	[flat|nested] 27+ messages in thread

* RE: [PATCH v7 1/1] app/testpmd: add valid check to verify multi mempool feature
  2022-11-22  6:42             ` Han, YingyaX
@ 2022-11-22  6:52               ` Tang, Yaqi
  2022-11-22  8:33                 ` Ferruh Yigit
  0 siblings, 1 reply; 27+ messages in thread
From: Tang, Yaqi @ 2022-11-22  6:52 UTC (permalink / raw)
  To: Han, YingyaX, Hanumanth Pothula, Singh, Aman Deep, Zhang, Yuying
  Cc: dev, andrew.rybchenko, thomas, Jiang, YuX, jerinj, ndabilpuram


> -----Original Message-----
> From: Han, YingyaX <yingyax.han@intel.com>
> Sent: Tuesday, November 22, 2022 2:43 PM
> To: Hanumanth Pothula <hpothula@marvell.com>; Singh, Aman Deep
> <aman.deep.singh@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>
> Cc: dev@dpdk.org; andrew.rybchenko@oktetlabs.ru; thomas@monjalon.net;
> Jiang, YuX <yux.jiang@intel.com>; jerinj@marvell.com;
> ndabilpuram@marvell.com
> Subject: RE: [PATCH v7 1/1] app/testpmd: add valid check to verify multi
> mempool feature
> 
> 
> > -----Original Message-----
> > From: Hanumanth Pothula <hpothula@marvell.com>
> > Sent: Tuesday, November 22, 2022 2:08 AM
> > To: Singh, Aman Deep <aman.deep.singh@intel.com>; Zhang, Yuying
> > <yuying.zhang@intel.com>
> > Cc: dev@dpdk.org; andrew.rybchenko@oktetlabs.ru;
> thomas@monjalon.net;
> > Jiang, YuX <yux.jiang@intel.com>; jerinj@marvell.com;
> > ndabilpuram@marvell.com; hpothula@marvell.com
> > Subject: [PATCH v7 1/1] app/testpmd: add valid check to verify multi
> > mempool feature
> >
> > Validate ethdev parameter 'max_rx_mempools' to know whether device
> > supports multi-mempool feature or not.
> >
> > Also, add new testpmd command line argument, multi-mempool, to
> control
> > multi-mempool feature. By default it's disabled.
> >
> > Bugzilla ID: 1128
> > Fixes: 4f04edcda769 ("app/testpmd: support multiple mbuf pools per Rx
> > queue")
> >
> > Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>
> 
> Tested-by: Yingya Han <yingyax.han@intel.com>
> 
> 

Tested-by: Yaqi Tang <yaqi.tang@intel.com>

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v7 1/1] app/testpmd: add valid check to verify multi mempool feature
  2022-11-22  6:52               ` Tang, Yaqi
@ 2022-11-22  8:33                 ` Ferruh Yigit
  0 siblings, 0 replies; 27+ messages in thread
From: Ferruh Yigit @ 2022-11-22  8:33 UTC (permalink / raw)
  To: Tang, Yaqi, Han, YingyaX, Hanumanth Pothula, Singh, Aman Deep,
	Zhang, Yuying
  Cc: dev, andrew.rybchenko, thomas, Jiang, YuX, jerinj, ndabilpuram

On 11/22/2022 6:52 AM, Tang, Yaqi wrote:
> 
>> -----Original Message-----
>> From: Han, YingyaX <yingyax.han@intel.com>
>> Sent: Tuesday, November 22, 2022 2:43 PM
>> To: Hanumanth Pothula <hpothula@marvell.com>; Singh, Aman Deep
>> <aman.deep.singh@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>
>> Cc: dev@dpdk.org; andrew.rybchenko@oktetlabs.ru; thomas@monjalon.net;
>> Jiang, YuX <yux.jiang@intel.com>; jerinj@marvell.com;
>> ndabilpuram@marvell.com
>> Subject: RE: [PATCH v7 1/1] app/testpmd: add valid check to verify multi
>> mempool feature
>>
>>
>>> -----Original Message-----
>>> From: Hanumanth Pothula <hpothula@marvell.com>
>>> Sent: Tuesday, November 22, 2022 2:08 AM
>>> To: Singh, Aman Deep <aman.deep.singh@intel.com>; Zhang, Yuying
>>> <yuying.zhang@intel.com>
>>> Cc: dev@dpdk.org; andrew.rybchenko@oktetlabs.ru;
>> thomas@monjalon.net;
>>> Jiang, YuX <yux.jiang@intel.com>; jerinj@marvell.com;
>>> ndabilpuram@marvell.com; hpothula@marvell.com
>>> Subject: [PATCH v7 1/1] app/testpmd: add valid check to verify multi
>>> mempool feature
>>>
>>> Validate ethdev parameter 'max_rx_mempools' to know whether device
>>> supports multi-mempool feature or not.
>>>
>>> Also, add new testpmd command line argument, multi-mempool, to
>> control
>>> multi-mempool feature. By default it's disabled.
>>>
>>> Bugzilla ID: 1128
>>> Fixes: 4f04edcda769 ("app/testpmd: support multiple mbuf pools per Rx
>>> queue")
>>>
>>> Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>
>>
>> Tested-by: Yingya Han <yingyax.han@intel.com>
>>
>>
> 
> Tested-by: Yaqi Tang <yaqi.tang@intel.com>

Thanks Yingya, Yaqi, I have updated the commit log with these tags.

^ permalink raw reply	[flat|nested] 27+ messages in thread

end of thread, other threads:[~2022-11-22  8:33 UTC | newest]

Thread overview: 27+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-11-17 11:30 [PATCH v1 1/1] app/testpmd: add valid check to verify multi mempool feature Hanumanth Pothula
2022-11-17 12:55 ` [PATCH v2 " Hanumanth Pothula
2022-11-17 15:00   ` Singh, Aman Deep
2022-11-17 15:58     ` [EXT] " Hanumanth Reddy Pothula
2022-11-17 16:03   ` [PATCH v3 " Hanumanth Pothula
2022-11-17 23:36     ` Ferruh Yigit
2022-11-18  6:51       ` Han, YingyaX
2022-11-18 11:37         ` Hanumanth Reddy Pothula
2022-11-18 11:13   ` Hanumanth Pothula
2022-11-18 14:13     ` [PATCH v4 " Hanumanth Pothula
2022-11-18 20:56       ` Ferruh Yigit
2022-11-19  0:00         ` [EXT] " Hanumanth Reddy Pothula
2022-11-21 10:08           ` Ferruh Yigit
2022-11-21 10:44             ` Hanumanth Reddy Pothula
2022-11-21 12:45       ` [PATCH v5 " Hanumanth Pothula
2022-11-21 13:22         ` Ferruh Yigit
2022-11-21 13:36           ` [EXT] " Hanumanth Reddy Pothula
2022-11-21 14:10             ` Ferruh Yigit
2022-11-21 14:33         ` [PATCH v6 " Hanumanth Pothula
2022-11-21 17:31           ` Ferruh Yigit
2022-11-21 17:45             ` [EXT] " Hanumanth Reddy Pothula
2022-11-21 18:05               ` Ferruh Yigit
2022-11-21 18:07           ` [PATCH v7 " Hanumanth Pothula
2022-11-21 18:40             ` Ferruh Yigit
2022-11-22  6:42             ` Han, YingyaX
2022-11-22  6:52               ` Tang, Yaqi
2022-11-22  8:33                 ` Ferruh Yigit
