From: "Ananyev, Konstantin" Subject: Re: [PATCH] ixgbe: prefetch packet headers in vector PMD receive function Date: Wed, 14 Oct 2015 23:23:02 +0000 Message-ID: <2601191342CEEE43887BDE71AB97725836AAFBAB@irsmsx105.ger.corp.intel.com> References: <1441135036-7491-1-git-send-email-zoltan.kiss@linaro.org> <55ED8252.1020900@linaro.org> <59AF69C657FD0841A61C55336867B5B0359227DF@IRSMSX103.ger.corp.intel.com> <55ED9BFD.7040009@linaro.org> <59AF69C657FD0841A61C55336867B5B035922A83@IRSMSX103.ger.corp.intel.com> <56059255.5010507@linaro.org> <2601191342CEEE43887BDE71AB97725836A9CE89@irsmsx105.ger.corp.intel.com> <561E7E83.5000401@linaro.org> To: Zoltan Kiss , "Richardson, Bruce" , "dev@dpdk.org" In-Reply-To: <561E7E83.5000401@linaro.org> List-Id: patches and discussions about DPDK

> -----Original Message-----
> From: Zoltan Kiss [mailto:zoltan.kiss@linaro.org]
> Sent: Wednesday, October 14, 2015 5:11 PM
> To: Ananyev, Konstantin; Richardson, Bruce; dev@dpdk.org
> Subject: Re: [PATCH] ixgbe: prefetch packet headers in vector PMD receive function
>
>
>
> On 28/09/15 00:19, Ananyev, Konstantin wrote:
> >
> >
> >> -----Original Message-----
> >> From: Zoltan Kiss [mailto:zoltan.kiss@linaro.org]
> >> Sent: Friday, September 25, 2015 7:29 PM
> >> To: Richardson, Bruce; dev@dpdk.org
> >> Cc: Ananyev, Konstantin
> >> Subject: Re: [PATCH] ixgbe: prefetch packet headers in vector PMD receive function
> >>
> >> On 07/09/15 07:41, Richardson, Bruce wrote:
> >>>
> >>>> -----Original Message-----
> >>>> From: Zoltan Kiss 
[mailto:zoltan.kiss@linaro.org]
> >>>> Sent: Monday, September 7, 2015 3:15 PM
> >>>> To: Richardson, Bruce; dev@dpdk.org
> >>>> Cc: Ananyev, Konstantin
> >>>> Subject: Re: [PATCH] ixgbe: prefetch packet headers in vector PMD receive
> >>>> function
> >>>>
> >>>>
> >>>>
> >>>> On 07/09/15 13:57, Richardson, Bruce wrote:
> >>>>>
> >>>>>> -----Original Message-----
> >>>>>> From: Zoltan Kiss [mailto:zoltan.kiss@linaro.org]
> >>>>>> Sent: Monday, September 7, 2015 1:26 PM
> >>>>>> To: dev@dpdk.org
> >>>>>> Cc: Ananyev, Konstantin; Richardson, Bruce
> >>>>>> Subject: Re: [PATCH] ixgbe: prefetch packet headers in vector PMD
> >>>>>> receive function
> >>>>>>
> >>>>>> Hi,
> >>>>>>
> >>>>>> I just realized I've missed the "[PATCH]" tag from the subject. Did
> >>>>>> anyone have time to review this?
> >>>>>>
> >>>>> Hi Zoltan,
> >>>>>
> >>>>> the big thing that concerns me with this is the addition of new
> >>>>> instructions for each packet in the fast path. Ideally, this
> >>>>> prefetching would be better handled in the application itself, as for
> >>>>> some apps, e.g. those using pipelining, the core doing the RX from the
> >>>>> NIC may not touch the packet data at all, and the prefetches will
> >>>>> instead cause a performance slowdown.
> >>>>> Is it possible to get the same performance increase - or something
> >>>>> close to it - by making changes in OVS?
> >>>> OVS already does a prefetch when it's processing the previous packet, but
> >>>> apparently it's not early enough. At least for my test scenario, where I'm
> >>>> forwarding UDP packets with the least possible overhead. I guess in tests
> >>>> where OVS does more complex processing it should be fine.
> >>>> I'll try to move the prefetch earlier in the OVS codebase, but I'm not sure if
> >>>> it'll help.
> >>> I would suggest trying to prefetch more than one packet ahead. Prefetching 4 or
> >>> 8 ahead might work better, depending on the processing being done. 
> >>
> >> I've moved the prefetch earlier, and it seems to work:
> >>
> >> https://patchwork.ozlabs.org/patch/519017/
> >>
> >> However it raises the question: should we remove header prefetch from
> >> all the other drivers, and the CONFIG_RTE_PMD_PACKET_PREFETCH option?
> >
> > My vote would be for that.
> > Konstantin
>
> After some further thinking I would rather support the
> rte_packet_prefetch() macro (prefetch the header in the driver, and
> configure it through CONFIG_RTE_PMD_PACKET_PREFETCH):
>
> - the prefetch can happen earlier, so if an application needs the header
> right away, this is the fastest option
> - if the application doesn't need header prefetch, it can turn it off at
> compile time, the same as if we didn't have this option
> - if the application has mixed needs (sometimes it needs the header
> right away, sometimes it doesn't), it can turn it off and do what it
> needs, the same as if we didn't have this option
>
> A harder decision would be whether it should be turned on or off by
> default. Currently it's on, and half of the receive functions don't use it.

Yep, it is sort of a mess right now.
Another question, if we'd like to keep it and standardise it:
at what moment should we prefetch - as soon as we realise that the HW is done
with that buffer, or as late inside rx_burst() as possible?
I suppose there is no clear answer for that.
That's why my thought was to just get rid of it.
Though if it were implemented in some standardised way and disabled by
default - that's probably ok too.
BTW, couldn't that be implemented just via the rte_ethdev rx callbacks mechanism?
Then we could have the code in one place and wouldn't need a compile option at all -
it could be enabled/disabled dynamically on a per-NIC basis.
Or would the overhead of that be too high?
Konstantin

>
> >
> >
> >>
> >>>
> >>> /Bruce
> >