From: Matan Azrad <matan@mellanox.com>
To: Ferruh Yigit <ferruh.yigit@intel.com>,
	Dekel Peled <dekelp@mellanox.com>,
	 "john.mcnamara@intel.com" <john.mcnamara@intel.com>,
	"marko.kovacevic@intel.com" <marko.kovacevic@intel.com>,
	"nhorman@tuxdriver.com" <nhorman@tuxdriver.com>,
	"ajit.khaparde@broadcom.com" <ajit.khaparde@broadcom.com>,
	"somnath.kotur@broadcom.com" <somnath.kotur@broadcom.com>,
	"anatoly.burakov@intel.com" <anatoly.burakov@intel.com>,
	"xuanziyang2@huawei.com" <xuanziyang2@huawei.com>,
	"cloud.wangxiaoyun@huawei.com" <cloud.wangxiaoyun@huawei.com>,
	"zhouguoyang@huawei.com" <zhouguoyang@huawei.com>,
	"wenzhuo.lu@intel.com" <wenzhuo.lu@intel.com>,
	"konstantin.ananyev@intel.com" <konstantin.ananyev@intel.com>,
	Shahaf Shuler <shahafs@mellanox.com>,
	Slava Ovsiienko <viacheslavo@mellanox.com>,
	"rmody@marvell.com" <rmody@marvell.com>,
	"shshaikh@marvell.com" <shshaikh@marvell.com>,
	"maxime.coquelin@redhat.com" <maxime.coquelin@redhat.com>,
	"tiwei.bie@intel.com" <tiwei.bie@intel.com>,
	"zhihong.wang@intel.com" <zhihong.wang@intel.com>,
	"yongwang@vmware.com" <yongwang@vmware.com>,
	Thomas Monjalon <thomas@monjalon.net>,
	"arybchenko@solarflare.com" <arybchenko@solarflare.com>,
	"jingjing.wu@intel.com" <jingjing.wu@intel.com>,
	"bernard.iremonger@intel.com" <bernard.iremonger@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH v4 1/3] ethdev: support API to set max LRO packet size
Date: Mon, 11 Nov 2019 11:33:23 +0000
Message-ID: <AM0PR0502MB4019A6C50FF14AD23D12B1B4D2740@AM0PR0502MB4019.eurprd05.prod.outlook.com>
In-Reply-To: <b2d5936b-a3cf-8ee7-b1ae-e6b58b82c1be@intel.com>



From: Ferruh Yigit
> On 11/9/2019 6:20 PM, Matan Azrad wrote:
> > Hi
> >
> > From: Ferruh Yigit
> >> On 11/8/2019 11:56 AM, Matan Azrad wrote:
> >>>
> >>>
> >>> From: Ferruh Yigit
> >>>> On 11/8/2019 10:10 AM, Matan Azrad wrote:
> >>>>>
> >>>>>
> >>>>> From: Ferruh Yigit
> >>>>>> On 11/8/2019 6:54 AM, Matan Azrad wrote:
> >>>>>>> Hi
> >>>>>>>
> >>>>>>> From: Ferruh Yigit
> >>>>>>>> On 11/7/2019 12:35 PM, Dekel Peled wrote:
> >>>>>>>>> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
> >>>>>>>>>
> >>>>>>>>> 	RTE_ETHER_MAX_LEN;
> >>>>>>>>>  	}
> >>>>>>>>>
> >>>>>>>>> +	/*
> >>>>>>>>> +	 * If LRO is enabled, check that the maximum aggregated packet
> >>>>>>>>> +	 * size is supported by the configured device.
> >>>>>>>>> +	 */
> >>>>>>>>> +	if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
> >>>>>>>>> +		ret = check_lro_pkt_size(
> >>>>>>>>> +				port_id, dev_conf->rxmode.max_lro_pkt_size,
> >>>>>>>>> +				dev_info.max_lro_pkt_size);
> >>>>>>>>> +		if (ret != 0)
> >>>>>>>>> +			goto rollback;
> >>>>>>>>> +	}
> >>>>>>>>> +
> >>>>>>>>
> >>>>>>>> This check forces applications that enable LRO to provide a
> >>>>>>>> 'max_lro_pkt_size' config value.
> >>>>>>>
> >>>>>>> Yes. (We can break an API; we noticed it.)
> >>>>>>
> >>>>>> I am not talking about API/ABI breakage, that part is OK.
> >>>>>> With this check, if the application requests the LRO offload but does
> >>>>>> not provide a 'max_lro_pkt_size' value, the device configuration will
> >>>>>> fail.
> >>>>>>
> >>>>> Yes
> >>>>>> Can there be a case where the application is fine with whatever
> >>>>>> maximum the PMD can support?
> >>>>> Yes, it can be - you know, we can do whatever we want, but it is
> >>>>> better to be consistent:
> >>>>> Since the max rx pkt len field is mandatory for the JUMBO offload,
> >>>>> max lro pkt len should be mandatory for the LRO offload.
> >>>>>
> >>>>> So your question is actually why both the non-LRO and the LRO
> >>>>> maximum packet sizes are mandatory...
> >>>>>
> >>>>>
> >>>>> I think these are important values for network application management.
> >>>>> They are also good for mbuf size management.
> >>>>>
> >>>>>>>
> >>>>>>>> - Why is it mandatory now? How was it working before, if it is a
> >>>>>>>> mandatory value?
> >>>>>>>
> >>>>>>> It is the same as max_rx_pkt_len, which is mandatory for the jumbo
> >>>>>>> frame offload.
> >>>>>>> So now, when the user configures the LRO offload, he must set
> >>>>>>> max lro pkt len.
> >>>>>>> We don't want to confuse the user here with the max rx pkt len
> >>>>>>> configurations and behaviors; they should follow the same logic.
> >>>>>>>
> >>>>>>> This parameter defines the LRO behavior well.
> >>>>>>> Before this, each PMD used its own interpretation of what the maximum
> >>>>>>> size of LRO aggregated packets should be.
> >>>>>>> Now, the user must state his intention, and ethdev can limit it
> >>>>>>> according to the device capability.
> >>>>>>> This way the PMD can also organize/optimize its data-path further.
> >>>>>>> Also, the application can create different mempools for LRO queues,
> >>>>>>> to allow receiving bigger packets for LRO traffic.
> >>>>>>>
> >>>>>>>> - What happens if the PMD doesn't provide 'max_lro_pkt_size', so it
> >>>>>>>> is '0'?
> >>>>>>> Yes, you can see the feature description Dekel added.
> >>>>>>> This patch also updates all the PMDs that support LRO to a non-0
> >>>>>>> value.
> >>>>>>
> >>>>>> Of course I can see the updates, Matan; my point is "What happens
> >>>>>> if the PMD doesn't provide 'max_lro_pkt_size'":
> >>>>>> 1) There is no check for it, right? So it is acceptable?
> >>>>>
> >>>>> There is a check.
> >>>>> If the capability is 0, any non-zero configuration will fail.
> >>>>>
> >>>>>> 2) Are we making this field mandatory for PMDs to provide? It is
> >>>>>> easy to make new fields mandatory for PMDs, but is this really
> >>>>>> necessary?
> >>>>>
> >>>>> Yes, for consistency.
> >>>>>
> >>>>>>>
> >>>>>>> The same as max rx pkt len, no?
> >>>>>>>
> >>>>>>>> - What do you think about setting the 'max_lro_pkt_size' config
> >>>>>>>> value to what the PMD provided, if the application doesn't provide
> >>>>>>>> it?
> >>>>>>> Same answers as above.
> >>>>>>>
> >>>>>>
> >>>>>> If the application doesn't care about the value, as has been the case
> >>>>>> till now, and has not provided an explicit 'max_lro_pkt_size', why
> >>>>>> shouldn't the ethdev level use the value provided by the PMD instead
> >>>>>> of failing?
> >>>>>
> >>>>> Again, the same question can be asked about max rx pkt len.
> >>>>>
> >>>>> It looks like the packet size is a very important value which should
> >>>>> be set by the application.
> >>>>>
> >>>>> Previous applications had no option to configure it, so they haven't
> >>>>> configured it (probably covering it somehow). I think it was our miss
> >>>>> not to supply this info.
> >>>>>
> >>>>> Let's do it in the same way as we do max rx pkt len (that is this
> >>>>> patch's main idea).
> >>>>> Later, we can change both to another meaning.
> >>>>>
> >>>>
> >>>> I think the fact that 'max_rx_pkt_len' does it is not a good reason to
> >>>> introduce a new mandatory config option for the application.
> >>>
> >>> It is mandatory only if LRO offload is configured.
> >>>
> >>>> Will it work if:
> >>>> - If the application doesn't provide this value, use the PMD max?
> >>>
> >>> It may cause a problem if the mbuf size is not enough for the PMD
> >>> maximum.
> >>
> >> OK, this is what I was missing; for this case I was thinking
> >> max_rx_pkt_len would be used, but you already explained that the
> >> application may want to use different mempools for LRO queues.
> >>
> > So, do you agree with the idea?
> >
> >> For this case shouldn't PMDs take the 'rxmode.max_lro_pkt_size' into
> >> account and program the device accordingly (of course only in the LRO
> >> enabled case)?
> >> This part seems missing and should be highlighted to the other PMD
> >> maintainers.
> >
> >
> > Yes, you are right.
> > PMDs must limit the LRO aggregated packet size according to the new field,
> > and it is probably very hard for the patch introducer to understand how to
> > do it for each PMD.
> >
> > I think each new configuration requires the other maintainers/developers
> > to adjust their own PMD code to the new configuration, and it should be
> > done in a limited time.
> 
> Agree.
> But experience has shown that this synchronization is not as easy as it
> sounds: whoever changes the interface/library says other PMDs should reflect
> the change, but most of the time the other PMD maintainers are not aware of
> it, or if they are, they have other priorities for the release. So the
> changes should be made in a way that gives PMDs more time to adapt, and
> during this time the library change shouldn't break other PMDs.
> 

Yes.

> > My suggestion here:
> > 1. To reserve the info field and the configuration field for rc2 (if it is
> >    critical not to break the ABI in rc3).
> > 2. To merge the ethdev patch at the start of rc3.
> > 3. To request each relevant PMD maintainer to adjust their PMD to the new
> >    configuration by the end of rc3.
> > 	Note: this should be a small change, and only for ~5 PMDs:
> > 		a. Introduce the info field according to the device ability.
> > 		b. For each LRO queue:
> > 			Use the LRO max size configuration instead of the
> > 			current max rx pkt len configuration (it looks like
> > 			a small condition).
> >
> > What do you think?
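
To make step (b) above concrete: in a PMD Rx setup path it would typically be
a small per-queue condition, roughly like the sketch below ('lro_on_queue' is
an illustrative placeholder, not an existing PMD variable, and
'max_lro_pkt_size' is the rxmode field from this patch series):

    uint32_t max_pkt_size;

    /* Pick the Rx size limit per queue: the LRO limit for LRO queues,
     * otherwise the regular maximum Rx packet length. */
    if ((dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) &&
        lro_on_queue)
        max_pkt_size = dev->data->dev_conf.rxmode.max_lro_pkt_size;
    else
        max_pkt_size = dev->data->dev_conf.rxmode.max_rx_pkt_len;
    /* ... then program the device Rx queue with max_pkt_size ... */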
> 
> There is already a v6 which only updates the dev_info fields to have the
> 'max_lro_pktlen' field; the PMD updates there also look safe, so I think we
> can go with it for rc2.
> 

It doesn't make sense to expose the info field without the configuration.
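
To illustrate how the info field and the configuration field are meant to work
together on the application side, here is a rough sketch only - the
'max_lro_pkt_size' names are taken from this patch series, and the port/queue
numbers and the 9000-byte request are example placeholders:

    #include <rte_ethdev.h>

    uint16_t port_id = 0, nb_rx_q = 1, nb_tx_q = 1; /* example values */
    struct rte_eth_conf conf = { 0 };
    struct rte_eth_dev_info dev_info;
    int ret;

    /* Read the capability reported by the PMD. */
    rte_eth_dev_info_get(port_id, &dev_info);

    conf.rxmode.offloads |= DEV_RX_OFFLOAD_TCP_LRO;
    /* The application states its intent; rte_eth_dev_configure() validates
     * it against the dev_info.max_lro_pkt_size capability. */
    conf.rxmode.max_lro_pkt_size =
        RTE_MIN((uint32_t)9000, dev_info.max_lro_pkt_size);

    ret = rte_eth_dev_configure(port_id, nb_rx_q, nb_tx_q, &conf);
    /* ret != 0 if the requested LRO size exceeds the capability */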


> For the configuration part, I suggest deferring it to the next release, which
> gives more time for discussion and enough time for other PMDs to implement it.
> 
> 
> And related to the configuration: right now devices are already configured to
> limit the packet size to 'max_rx_pkt_len'. It can be an optimization to
> increase it to 'max_lro_pkt_len' for the queues where LRO is supported, so
> why not make this configuration more explicit with a specific API, as
> Konstantin suggested [1]? This way it only affects the applications that are
> interested in it and the PMDs that want to support it.
> The current implementation is under 'rte_eth_dev_configure()', which is used
> by all DPDK applications, so the impact of changing it is much larger; it
> also makes it mandatory for applications to provide this config option when
> LRO is enabled. An explicit API gives the same result without making the
> config option mandatory.
> 
> [1]
> int rte_eth_dev_set_max_lro(uint16_t port_id, uint32_t lro);

Please see my answers to Konstantin regarding this topic.



One more option, in order not to break PMDs because of this feature:
A 0 in the capability field would mean the PMD doesn't support a special LRO
limitation, so if the application configuration is not the same as
max_rx_pkt_len, the validation will fail.
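
In code terms, this option would roughly mean the following inside the
rte_eth_dev_configure() validation (a sketch only, reusing the names from the
quoted patch, not the merged implementation):

    if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
        if (dev_info.max_lro_pkt_size == 0) {
            /* PMD declares no dedicated LRO limit: accept only a
             * configuration equal to max_rx_pkt_len. */
            if (dev_conf->rxmode.max_lro_pkt_size !=
                dev_conf->rxmode.max_rx_pkt_len) {
                ret = -EINVAL;
                goto rollback;
            }
        } else {
            ret = check_lro_pkt_size(port_id,
                    dev_conf->rxmode.max_lro_pkt_size,
                    dev_info.max_lro_pkt_size);
            if (ret != 0)
                goto rollback;
        }
    }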







Thread overview: 79+ messages
2019-11-05  8:40 [dpdk-dev] [PATCH 0/3] support API to set max LRO packet size Dekel Peled
2019-11-05  8:40 ` [dpdk-dev] [PATCH 1/3] ethdev: " Dekel Peled
2019-11-05 12:39   ` Andrew Rybchenko
2019-11-05 13:09     ` Thomas Monjalon
2019-11-05 14:18     ` Dekel Peled
2019-11-05 14:27       ` Andrew Rybchenko
2019-11-05 14:51         ` Dekel Peled
2019-11-05  8:40 ` [dpdk-dev] [PATCH 2/3] net/mlx5: use " Dekel Peled
2019-11-05  8:40 ` [dpdk-dev] [PATCH 3/3] app/testpmd: " Dekel Peled
2019-11-05  9:35 ` [dpdk-dev] [PATCH 0/3] support " Matan Azrad
2019-11-06 11:34 ` [dpdk-dev] [PATCH v2 " Dekel Peled
2019-11-06 11:34   ` [dpdk-dev] [PATCH v2 1/3] ethdev: " Dekel Peled
2019-11-06 12:26     ` Thomas Monjalon
2019-11-06 12:39       ` Dekel Peled
2019-11-06 11:34   ` [dpdk-dev] [PATCH v2 2/3] net/mlx5: use " Dekel Peled
2019-11-06 11:34   ` [dpdk-dev] [PATCH v2 3/3] app/testpmd: " Dekel Peled
2019-11-06 12:35     ` Iremonger, Bernard
2019-11-06 13:14       ` Dekel Peled
2019-11-06 14:28   ` [dpdk-dev] [PATCH v3 0/3] support " Dekel Peled
2019-11-06 14:28     ` [dpdk-dev] [PATCH v3 1/3] ethdev: " Dekel Peled
2019-11-07 11:57       ` [dpdk-dev] [EXT] " Shahed Shaikh
2019-11-07 12:18         ` Dekel Peled
2019-11-06 14:28     ` [dpdk-dev] [PATCH v3 2/3] net/mlx5: use " Dekel Peled
2019-11-06 14:28     ` [dpdk-dev] [PATCH v3 3/3] app/testpmd: " Dekel Peled
2019-11-06 16:41     ` [dpdk-dev] [PATCH v3 0/3] support " Iremonger, Bernard
2019-11-07  6:10       ` Dekel Peled
2019-11-07 12:35     ` [dpdk-dev] [PATCH v4 " Dekel Peled
2019-11-07 12:35       ` [dpdk-dev] [PATCH v4 1/3] ethdev: " Dekel Peled
2019-11-07 20:15         ` Ferruh Yigit
2019-11-08  6:54           ` Matan Azrad
2019-11-08  9:19             ` Ferruh Yigit
2019-11-08 10:10               ` Matan Azrad
2019-11-08 11:37                 ` Ferruh Yigit
2019-11-08 11:56                   ` Matan Azrad
2019-11-08 12:51                     ` Ferruh Yigit
2019-11-08 16:11                       ` Dekel Peled
2019-11-08 16:53                         ` Ferruh Yigit
2019-11-09 18:20                       ` Matan Azrad
2019-11-10 23:40                         ` Ananyev, Konstantin
2019-11-11  8:01                           ` Matan Azrad
2019-11-12 18:31                             ` Ananyev, Konstantin
2019-11-11 11:15                         ` Ferruh Yigit
2019-11-11 11:33                           ` Matan Azrad [this message]
2019-11-11 12:21                             ` Ferruh Yigit
2019-11-11 13:32                               ` Matan Azrad
2019-11-08 13:11                     ` Ananyev, Konstantin
2019-11-08 14:10                       ` Dekel Peled
2019-11-08 14:52                         ` Ananyev, Konstantin
2019-11-08 16:08                           ` Dekel Peled
2019-11-08 16:28                             ` Ananyev, Konstantin
2019-11-09 18:26                               ` Matan Azrad
2019-11-10 22:51                                 ` Ananyev, Konstantin
2019-11-11  6:53                                   ` Matan Azrad
2019-11-07 12:35       ` [dpdk-dev] [PATCH v4 2/3] net/mlx5: use " Dekel Peled
2019-11-08  9:12         ` Slava Ovsiienko
2019-11-08  9:23           ` Ferruh Yigit
2019-11-07 12:35       ` [dpdk-dev] [PATCH v4 3/3] app/testpmd: " Dekel Peled
2019-11-07 14:20         ` Iremonger, Bernard
2019-11-07 20:25         ` Ferruh Yigit
2019-11-08  6:56           ` Matan Azrad
2019-11-08 13:58           ` Dekel Peled
2019-11-08  6:28       ` [dpdk-dev] [PATCH v4 0/3] support " Matan Azrad
2019-11-08 16:42       ` [dpdk-dev] [PATCH v5 " Dekel Peled
2019-11-08 16:42         ` [dpdk-dev] [PATCH v5 1/3] ethdev: " Dekel Peled
2019-11-10 23:07           ` Ananyev, Konstantin
2019-11-11  7:40             ` Dekel Peled
2019-11-08 16:42         ` [dpdk-dev] [PATCH v5 2/3] net/mlx5: use " Dekel Peled
2019-11-08 16:42         ` [dpdk-dev] [PATCH v5 3/3] app/testpmd: " Dekel Peled
2019-11-10 23:11           ` Ananyev, Konstantin
2019-11-11  7:40             ` Dekel Peled
2019-11-08 23:07 ` [dpdk-dev] [PATCH v6] ethdev: add " Thomas Monjalon
2019-11-10 22:47   ` Ananyev, Konstantin
2019-11-11 17:47   ` [dpdk-dev] [PATCH v7 0/3] support API to set " Dekel Peled
2019-11-11 17:47     ` [dpdk-dev] [PATCH v7 1/3] ethdev: " Dekel Peled
2019-11-12  0:46       ` Ferruh Yigit
2019-11-11 17:47     ` [dpdk-dev] [PATCH v7 2/3] net/mlx5: use " Dekel Peled
2019-11-11 17:47     ` [dpdk-dev] [PATCH v7 3/3] app/testpmd: " Dekel Peled
2019-11-12  0:46       ` Ferruh Yigit
2019-11-12  0:47     ` [dpdk-dev] [PATCH v7 0/3] support " Ferruh Yigit
