From: "Ananyev, Konstantin"
Subject: Re: [RFC] Add GRO support in DPDK
Date: Tue, 24 Jan 2017 10:33:06 +0000
Message-ID: <2601191342CEEE43887BDE71AB9772583F10AEBD@irsmsx105.ger.corp.intel.com>
In-Reply-To: <1F520FF1-C38B-483B-95E1-FBD4C631E7D2@intel.com>
To: "Wiles, Keith"
Cc: Stephen Hemminger, "Hu, Jiayu", dev@dpdk.org, "Kinsella, Ray",
 "Gilmore, Walter E", "Venkatesan, Venky", yuanhan.liu@linux.intel.com

> -----Original Message-----
> From: Wiles, Keith
> Sent: Tuesday, January 24, 2017 5:26 AM
> To: Ananyev, Konstantin
> Cc: Stephen Hemminger; Hu, Jiayu; dev@dpdk.org; Kinsella, Ray;
> Gilmore, Walter E; Venkatesan, Venky; yuanhan.liu@linux.intel.com
> Subject: Re: [dpdk-dev] [RFC] Add GRO support in DPDK
>
>
> > On Jan 23, 2017, at 6:43 PM, Ananyev, Konstantin wrote:
> >
> >> -----Original Message-----
> >> From: Wiles, Keith
> >> Sent: Monday, January 23, 2017 9:53 PM
> >> To: Stephen Hemminger
> >> Cc: Hu, Jiayu; dev@dpdk.org; Kinsella, Ray; Ananyev, Konstantin;
> >> Gilmore, Walter E; Venkatesan, Venky; yuanhan.liu@linux.intel.com
> >> Subject: Re: [dpdk-dev] [RFC] Add GRO support in DPDK
> >>
> >>
> >>> On Jan 23, 2017, at 10:15 AM, Stephen Hemminger wrote:
> >>>
> >>> On Mon, 23 Jan 2017 21:03:12 +0800
> >>> Jiayu Hu wrote:
> >>>
> >>>> With the support of hardware segmentation techniques in DPDK, the
> >>>> networking stack overheads on the send side of applications that
> >>>> directly leverage DPDK have been greatly reduced. But on the
> >>>> receive side, large numbers of segmented packets seriously burden
> >>>> the networking stack of applications. Generic Receive Offload
> >>>> (GRO) is a widely used method to solve this receive-side issue:
> >>>> it gains performance by reducing the number of packets processed
> >>>> by the networking stack. Currently, however, DPDK doesn't support
> >>>> GRO. Therefore, we propose to add GRO support to DPDK, and this
> >>>> RFC explains the basic DPDK GRO design.
> >>>>
> >>>> DPDK GRO is a SW-based packet assembly library, which provides
> >>>> GRO abilities for a number of protocols. In DPDK GRO, packets are
> >>>> merged after they are received from drivers and before they are
> >>>> returned to applications.
> >>>>
> >>>> In DPDK, GRO is a capability of NIC drivers. Whether GRO is
> >>>> supported, and which GRO types are supported, is up to each NIC
> >>>> driver; different drivers may support different GRO types. By
> >>>> default, drivers enable all supported GRO types. Applications can
> >>>> inquire the GRO types supported by each driver, and can control
> >>>> which GRO types are applied. For example, if ixgbe supports TCP
> >>>> and UDP GRO but the application needs only TCP GRO, the
> >>>> application can disable ixgbe UDP GRO.
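
(For illustration, the per-port control flow described above might
look roughly like the sketch below. rte_eth_gro_disable_protocols(),
the RTE_GRO_* flags and the dev_info.gro_types field are hypothetical
names based on this RFC, not an existing DPDK API.)

    #include <rte_ethdev.h>

    /* Hypothetical names from the RFC: RTE_GRO_UDP_IPV4,
     * rte_eth_gro_disable_protocols() and dev_info.gro_types do not
     * exist in DPDK today. */
    static void
    keep_only_tcp_gro(uint8_t port_id)
    {
            struct rte_eth_dev_info dev_info;

            rte_eth_dev_info_get(port_id, &dev_info);

            /* the PMD reports its supported GRO types as a bitmask */
            if (dev_info.gro_types & RTE_GRO_UDP_IPV4)
                    /* this application needs only TCP GRO */
                    rte_eth_gro_disable_protocols(port_id,
                                                  RTE_GRO_UDP_IPV4);
    }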
> >>>>
> >>>> To support GRO, a driver should provide a way to tell
> >>>> applications which GRO types it supports, and should provide a
> >>>> GRO function, which is in charge of assembling packets. Since
> >>>> different drivers may support different GRO types, their GRO
> >>>> functions may differ. Applications don't need any extra
> >>>> operations to enable GRO. But if some GRO types are not needed,
> >>>> applications can use an API, like rte_eth_gro_disable_protocols,
> >>>> to disable them. Besides, they can re-enable the disabled ones.
> >>>>
> >>>> The GRO function processes a number of packets at a time. In
> >>>> each invocation, which GRO types are applied depends on the
> >>>> application, and the number of packets to merge depends on the
> >>>> networking status and the application. Specifically, the
> >>>> application determines the maximum number of packets to be
> >>>> processed by the GRO function, but how many packets are actually
> >>>> processed depends on whether there are packets available to
> >>>> receive. For example, if the receive-side application asks the
> >>>> GRO function to process 64 packets but the sender only sends 40,
> >>>> the GRO function returns after processing 40 packets. To
> >>>> reassemble the given packets, the GRO function performs an
> >>>> "assembly procedure" on each packet. We use an example to
> >>>> demonstrate this procedure. Supposing the GRO function is going
> >>>> to process packetX, it will do the following two things:
> >>>> a. Find an L4 assembly function according to the packet type of
> >>>>  packetX. An L4 assembly function is in charge of merging
> >>>>  packets of a specific type. For example, the TCPv4 assembly
> >>>>  function merges packets whose L3 is IPv4 and L4 is TCP. Each L4
> >>>>  assembly function has a packet array, which keeps the packets
> >>>>  that could not be assembled. Initially, the packet array is
> >>>>  empty;
> >>>> b. The L4 assembly function traverses its own packet array to
> >>>>  find a mergeable packet (comparing Ethernet, IP and L4 header
> >>>>  fields). If it finds one, it merges it with packetX by chaining
> >>>>  them together; if it doesn't, it allocates a new array element
> >>>>  to store packetX and updates the element count of the array.
> >>>> After performing the assembly procedure on all packets, the GRO
> >>>> function combines the results of all packet arrays and returns
> >>>> these packets to applications.
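
(As an illustration of this assembly procedure, a minimal sketch of a
per-type packet array and its merge step follows. The gro_tcp4_* names
are made up for this example; real GRO code would also trim the inner
headers and advance next_seq by the TCP payload length.)

    #include <string.h>
    #include <rte_mbuf.h>

    /* one "rule set": the prerequisites for merging two packets */
    struct gro_tcp4_key {
            uint32_t src_ip, dst_ip;
            uint16_t src_port, dst_port;
            uint32_t next_seq;          /* expected next TCP sequence */
    };

    struct gro_tcp4_item {
            struct gro_tcp4_key key;    /* rules kept next to the pkt */
            struct rte_mbuf *pkt;       /* head of the merged chain */
    };

    /* Returns 1 if pkt was merged or stored, 0 if the array is full. */
    static int
    gro_tcp4_reassemble(struct gro_tcp4_item *items, uint16_t *nb_items,
                        uint16_t max_items, struct rte_mbuf *pkt,
                        const struct gro_tcp4_key *key)
    {
            uint16_t i;

            for (i = 0; i < *nb_items; i++) {
                    /* compare the stored rules first; the stored mbuf
                     * is only dereferenced once the rules match */
                    if (memcmp(&items[i].key, key, sizeof(*key)) != 0)
                            continue;
                    /* merge: chain the two mbufs together */
                    if (rte_pktmbuf_chain(items[i].pkt, pkt) == 0)
                            return 1;
            }
            if (*nb_items == max_items)
                    return 0;
            /* nothing mergeable found: store pkt in a new element */
            items[*nb_items].key = *key;
            items[*nb_items].pkt = pkt;
            (*nb_items)++;
            return 1;
    }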
> >>>>
> >>>> There are lots of ways to implement the above design in DPDK.
> >>>> One of them is:
> >>>> a. Drivers tell applications which GRO types they support via
> >>>>  dev->dev_ops->dev_infos_get;
> >>>> b. At initialization, drivers register their own GRO function as
> >>>>  an RX callback, which is invoked inside rte_eth_rx_burst. The
> >>>>  name of the GRO function should be like xxx_gro_receive (e.g.
> >>>>  ixgbe_gro_receive). Currently, the RX callback can only process
> >>>>  the packets returned by dev->rx_pkt_burst each time, and the
> >>>>  maximum number of packets dev->rx_pkt_burst returns is
> >>>>  determined by each driver and can't be influenced by
> >>>>  applications. Therefore, to implement the above GRO design, we
> >>>>  have to modify the current RX implementation to make drivers
> >>>>  return as many packets as possible, until the number of packets
> >>>>  meets the demand of the application or there are no more
> >>>>  packets to receive. This modification is also proposed in
> >>>>  patch: http://dpdk.org/ml/archives/dev/2017-January/055887.html;
> >>>> c. The GRO types to apply and the maximum number of packets to
> >>>>  merge are passed by resetting RX callback parameters, which can
> >>>>  be achieved by invoking rte_eth_rx_callback;
> >>>> d. Simply, we could just store packet addresses in the packet
> >>>>  array, so that to check one element we would fetch the packet
> >>>>  via its address. However, this simple design is not efficient
> >>>>  enough: checking a packet then requires a pointer dereference,
> >>>>  and a pointer dereference typically causes a cache line miss. A
> >>>>  better way is to store some rules in each array element. The
> >>>>  rules must be the prerequisites for merging two packets, like
> >>>>  the sequence number of TCP packets. We first compare the rules,
> >>>>  then retrieve the packet only if the rules match. If storing
> >>>>  the rules makes the packet array structure cache-unfriendly, we
> >>>>  can store a fixed-length signature of the rules instead. For
> >>>>  example, the signature can be calculated by performing an XOR
> >>>>  operation on the IP addresses. Both designs avoid unnecessary
> >>>>  pointer dereferences.
> >>>
> >>>
> >>> Since DPDK does burst mode already, GRO is a lot less relevant.
> >>> GRO in Linux was invented because there is no burst mode in the
> >>> receive API.
> >>>
> >>> If you look at VPP in FD.io you will see they already do
> >>> aggregation and steering at a higher level in the stack.
> >>>
> >>> The point of GRO is that it is generic: no driver changes are
> >>> necessary. Your proposal would add a lot of overhead, and cause
> >>> drivers to have to be aware of higher-level flows.
> >>
> >> NACK
> >>
> >> The design is not super clear to me here, and we need to
> >> understand the impact on DPDK, performance and the application. I
> >> would like a design that is transparent to the application and has
> >> as little impact on performance as possible.
> >>
> >> Let's discuss this, as I am not sure my previous concerns were
> >> addressed in this RFC.
> >>
> >
> > I would agree that the design looks overcomplicated and strange:
> > if GRO can be (and is supposed to be) done fully in SW, why do we
> > need to modify PMDs at all? Why can't it just be a standalone DPDK
> > library that users can call at their convenience?
> > I'd suggest starting with a simple and widespread case (TCP?) and
> > trying to implement a library for it first: something similar to
> > what we have for IP reassembly.
>
> The reason this should not be a library the application calls is to
> allow for a transparent design for HW and SW support of this
> feature. Using the SW version, the application should not need to
> know (other than for performance) that GRO is being done for this
> port.
>

Why is that?
Let's say we have an IP reassembly library that is called explicitly
by the application. I think for L4 grouping we can do the same.
After all, it is a pure SW feature, so to me it makes sense to let
the application decide when/where to call it (see the sketch at the
end of this mail).
Again, it would allow people to develop and use it without any
modifications in current PMDs.

> As I was told, the Linux kernel hides this feature and makes it
> transparent.

Yes, but DPDK does a lot of things in a different way,
so that doesn't look like a compelling reason to me :)

Konstantin
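
P.S. For comparison, explicit, ip-reassembly-style usage of such a
standalone library could look roughly like the sketch below.
struct rte_gro_tbl and rte_gro_tcp_reassemble_burst() are illustrative
names only, loosely modeled on librte_ip_frag, not an existing API:

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST_SZ 64

    static uint16_t
    rx_and_merge(uint8_t port_id, uint16_t queue_id,
                 struct rte_gro_tbl *tbl, struct rte_mbuf **pkts)
    {
            uint16_t nb_rx, nb_merged;

            nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, BURST_SZ);
            if (nb_rx == 0)
                    return 0;

            /* the application decides when/where to merge: the library
             * is called explicitly after rte_eth_rx_burst(), with no
             * PMD involvement */
            nb_merged = rte_gro_tcp_reassemble_burst(tbl, pkts, nb_rx);

            /* pkts[0..nb_merged) now holds the merged packets plus any
             * packets that could not be merged */
            return nb_merged;
    }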