From: Hanumanth Reddy Pothula <hpothula@marvell.com>
To: Ferruh Yigit <ferruh.yigit@amd.com>,
	"thomas@monjalon.net" <thomas@monjalon.net>,
	"andrew.rybchenko@oktetlabs.ru" <andrew.rybchenko@oktetlabs.ru>,
	Nithin Kumar Dabilpuram <ndabilpuram@marvell.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
	"yux.jiang@intel.com" <yux.jiang@intel.com>,
	Jerin Jacob Kollanukkaran <jerinj@marvell.com>,
	Aman Singh <aman.deep.singh@intel.com>,
	Yuying Zhang <yuying.zhang@intel.com>
Subject: RE: [EXT] Re: [PATCH v4 1/1] app/testpmd: add valid check to verify multi mempool feature
Date: Mon, 21 Nov 2022 10:44:39 +0000	[thread overview]
Message-ID: <PH0PR18MB47507E0C8D10D60D1E18115BCB0A9@PH0PR18MB4750.namprd18.prod.outlook.com> (raw)
In-Reply-To: <31f1f1b3-e04d-4a99-64a7-d1d69004752f@amd.com>



> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@amd.com>
> Sent: Monday, November 21, 2022 3:38 PM
> To: Hanumanth Reddy Pothula <hpothula@marvell.com>;
> thomas@monjalon.net; andrew.rybchenko@oktetlabs.ru; Nithin Kumar
> Dabilpuram <ndabilpuram@marvell.com>
> Cc: dev@dpdk.org; yux.jiang@intel.com; Jerin Jacob Kollanukkaran
> <jerinj@marvell.com>; Aman Singh <aman.deep.singh@intel.com>; Yuying
> Zhang <yuying.zhang@intel.com>
> Subject: Re: [EXT] Re: [PATCH v4 1/1] app/testpmd: add valid check to
> verify multi mempool feature
> 
> On 11/19/2022 12:00 AM, Hanumanth Reddy Pothula wrote:
> >
> >
> >> -----Original Message-----
> >> From: Ferruh Yigit <ferruh.yigit@amd.com>
> >> Sent: Saturday, November 19, 2022 2:26 AM
> >> To: Hanumanth Reddy Pothula <hpothula@marvell.com>;
> >> thomas@monjalon.net; andrew.rybchenko@oktetlabs.ru; Nithin Kumar
> >> Dabilpuram <ndabilpuram@marvell.com>
> >> Cc: dev@dpdk.org; yux.jiang@intel.com; Jerin Jacob Kollanukkaran
> >> <jerinj@marvell.com>; Aman Singh <aman.deep.singh@intel.com>; Yuying
> >> Zhang <yuying.zhang@intel.com>
> >> Subject: [EXT] Re: [PATCH v4 1/1] app/testpmd: add valid check to
> >> verify multi mempool feature
> >>
> >> External Email
> >>
> >> ----------------------------------------------------------------------
> >> On 11/18/2022 2:13 PM, Hanumanth Pothula wrote:
> >>> Validate ethdev parameter 'max_rx_mempools' to know whether device
> >>> supports multi-mempool feature or not.
> >>>
> >>
> >> My preference would be to revert the testpmd patch [1] that adds this
> >> new feature after -rc2, and add it back next release with a new testpmd
> >> argument and the changes to the setup function mentioned below.
> >>
> >> @Andrew, @Thomas, @Jerin, what do you think?
> >>
> >>
> >> [1]
> >> 4f04edcda769 ("app/testpmd: support multiple mbuf pools per Rx
> >> queue")
> >>
> >>> Bugzilla ID: 1128
> >>>
> >>
> >> Can you please add fixes line?
> >>
> > Ack
> >>> Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>
> >>
> >> Please put the changelog after '---', which git will then take as a note.
> >>
> > Ack
> >>> v4:
> >>>  - updated if condition.
> >>> v3:
> >>>  - Simplified conditional check.
> >>>  - Corrected spell, whether.
> >>> v2:
> >>>  - Rebased on tip of next-net/main.
> >>> ---
> >>>  app/test-pmd/testpmd.c | 10 ++++++++--
> >>>  1 file changed, 8 insertions(+), 2 deletions(-)
> >>>
> >>> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index
> >>> 4e25f77c6a..c1b4dbd716 100644
> >>> --- a/app/test-pmd/testpmd.c
> >>> +++ b/app/test-pmd/testpmd.c
> >>> @@ -2655,17 +2655,23 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
> >>>  	union rte_eth_rxseg rx_useg[MAX_SEGS_BUFFER_SPLIT] = {};
> >>>  	struct rte_mempool *rx_mempool[MAX_MEMPOOL] = {};
> >>>  	struct rte_mempool *mpx;
> >>> +	struct rte_eth_dev_info dev_info;
> >>>  	unsigned int i, mp_n;
> >>>  	uint32_t prev_hdrs = 0;
> >>>  	int ret;
> >>>
> >>> +	ret = rte_eth_dev_info_get(port_id, &dev_info);
> >>> +	if (ret != 0)
> >>> +		return ret;
> >>> +
> >>>  	/* Verify Rx queue configuration is single pool and segment or
> >>>  	 * multiple pool/segment.
> >>> +	 * @see rte_eth_dev_info::max_rx_mempools
> >>>  	 * @see rte_eth_rxconf::rx_mempools
> >>>  	 * @see rte_eth_rxconf::rx_seg
> >>>  	 */
> >>> -	if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
> >>> -	    ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0))) {
> >>> +	if ((dev_info.max_rx_mempools == 0) && (rx_pkt_nb_segs <= 1 ||
> >>
> >> Using `dev_info.max_rx_mempools` for the check means that if the device
> >> supports multiple mempools, multiple mempools will be configured
> >> independently of the user configuration. But the user may prefer a
> >> single mempool or buffer split.
> >>
> > Please find my suggested logic.
> >
> >> Right now the only PMD that supports multiple mempools is 'cnxk', so
> >> this doesn't impact others, but I think this is not correct.
> >>
> >> Instead of re-using the testpmd "mbuf-size" parameter (it is already
> >> used for two other features, which is the reason for the defect), it
> >> would be better to have an explicit parameter for the multiple mempool
> >> feature.
> >>
> >>
> >>> +	    ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) == 0))) {
> >>>  		/* Single pool/segment configuration */
> >>>  		rx_conf->rx_seg = NULL;
> >>>  		rx_conf->rx_nseg = 0;
> >>
> >>
> >> The logic seems correct, although I have not tested it.
> >>
> >> The current function tries to detect the requested feature and set up
> >> queues accordingly; the features are:
> >> - single mempool
> >> - packet split (to multiple mempool)
> >> - multiple mempool (various size)
> >>
> >> And the logic in the function is:
> >> ``
> >> if ( (! multiple mempool) && (! packet split))
> >> 	setup for single mempool
> >> 	exit
> >>
> >> if (packet split)
> >> 	setup packet split
> >> else
> >> 	setup multiple mempool
> >> ``
> >>
> >> What do you think about
> >> a) simplifying the logic by making single mempool the fallback and last
> >> option, instead of detecting the non-existence of other configs
> >> b) having an explicit check for multiple mempool
> >>
> >> Like:
> >>
> >> ``
> >> if (packet split)
> >> 	setup packet split
> >> 	exit
> >> else if (multiple mempool)
> >> 	setup multiple mempool
> >> 	exit
> >>
> >> setup for single mempool
> >> ``
> >>
> >> I think this both solves the defect and simplifies the code.
> >
> > Yes Ferruh, your suggested logic simplifies the code.
> >
> > Along the lines of your proposed logic, the below if conditions might
> > work fine for all features (buffer split / multi-mempool) supported by
> > the PMD and preferred by the user:
> >
> > if (rx_pkt_nb_segs > 1 ||
> >             rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
> > 	/*multi-segment (buffer split)*/
> > } else if (mbuf_data_size_n > 1 && dev_info.max_rx_mempools > 1) {
> > 	/*multi-mempool*/
> > } else {
> > 	/* single pool and segment */
> > }
> >
> 
> `mbuf_data_size_n > 1` may mean the user is requesting multiple segments
> or buffer split, so I am not sure about using this value to decide on the
> multiple mempool feature; it can create side effects as Bug 1128 does.
> 
> > Or adding a new Rx offload parameter for the multi_mempool feature; I
> > think it might not be required, using dev_info.max_rx_mempools works fine.
> >
> 
> At the ethdev level, we don't need an offload flag, since the separate
> config options clarify the intention there.
> What is needed is a way for the application (testpmd) to understand the
> user's intention and configure the device accordingly.
> That is why I think 'dev_info.max_rx_mempools' does not work well here:
> it is a way for the device to say that multiple mempools are supported,
> not a way to get the user's intention.
> In your logic, when the device supports multiple mempools, they are used
> regardless of what the user requests.
> 
> I suggest having a testpmd argument that explicitly says the multiple
> mempool feature is requested; this will help to distinguish the buffer
> split, multiple mempool, and single mempool features.
> 
> Thanks.

Sure, I will upload a new patch set with a testpmd argument that tells whether the multiple mempool feature is enabled or disabled.
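
For reference, a minimal sketch of how the rx_queue_setup() branching could
look once the feature is driven by an explicit flag; the 'multi_mempool'
variable below (and whatever command-line option sets it) is a placeholder,
not the final implementation:

``
/* Hypothetical global, set when the user passes the new testpmd
 * command-line option requesting the multiple mempool feature.
 */
uint8_t multi_mempool; /* 0 = disabled (default), 1 = enabled */

if (rx_pkt_nb_segs > 1 ||
    rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
	/* multi-segment configuration (buffer split) */
} else if (multi_mempool == 1 && mbuf_data_size_n > 1 &&
	   dev_info.max_rx_mempools > 1) {
	/* multi-mempool configuration, one pool per mbuf size */
} else {
	/* fallback: single pool and single segment */
}
``

With an explicit flag, the multi-mempool path is taken only when the user
asks for it and the device reports support, so buffer split, multiple
mempool and single mempool no longer have to be guessed from "mbuf-size"
alone, which is what caused Bug 1128.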

> 
> > if (rx_pkt_nb_segs > 1 ||
> >             rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
> > 	/*multi-segment (buffer split)*/
> > } else if (mbuf_data_size_n > 1 && rx_conf->offloads &
> > 	    RTE_ETH_RX_OFFLOAD_MULTI_MEMPOOL) {
> > 	/*multi-mempool*/
> > } else {
> > 	/* single pool and segment */
> > }
> >
> > Please let me know your inputs on the above logic.
> >

