From: Ferruh Yigit
Subject: Re: [PATCH v10 0/6] Support TCP/IPv4, VxLAN, and GRE GSO in DPDK
Date: Sun, 8 Oct 2017 04:40:29 +0100
Message-ID: <94a7c45f-9aa6-a94e-61f2-d28cb1c5cce2@intel.com>
In-Reply-To: <1507388204-126972-1-git-send-email-jiayu.hu@intel.com>
To: Jiayu Hu, dev@dpdk.org
Cc: mark.b.kavanagh@intel.com, konstantin.ananyev@intel.com
List-Id: DPDK patches and discussions

On 10/7/2017 3:56 PM, Jiayu Hu wrote:
> Generic Segmentation Offload (GSO) is a software technique to split
> large packets into smaller ones. Akin to TSO, GSO enables applications
> to operate on large packets, thus reducing per-packet processing
> overhead.
>
> To give applications more flexibility, DPDK GSO is implemented as a
> standalone library: applications explicitly call into the GSO library
> to segment packets. This patch set adds GSO support to DPDK for
> specific packet types: TCP/IPv4, VxLAN, and GRE.
>
> The first patch introduces the GSO API framework. The second patch
> adds GSO support for TCP/IPv4 packets (containing an optional VLAN
> tag). The third patch adds GSO support for VxLAN packets that contain
> outer IPv4 and inner TCP/IPv4 headers (plus optional inner and/or
> outer VLAN tags). The fourth patch adds GSO support for GRE packets
> that contain outer IPv4 and inner TCP/IPv4 headers (with an optional
> outer VLAN tag).
> The fifth patch in the series enables TCP/IPv4, VxLAN, and GRE GSO in
> testpmd's checksum forwarding engine. The final patch in the series
> adds GSO documentation to the programmer's guide.
>
> Note that this patch set depends on the patch "app/testpmd: enable
> the heavyweight mode TCP/IPv4 GRO":
> http://dpdk.org/dev/patchwork/patch/29867/
>
> Performance Testing
> ===================
> The performance of TCP/IPv4 GSO on a 10Gbps link is demonstrated using
> iperf. The test setup is as follows:
>
> a. Connect 2 x 10Gbps physical ports (P0, P1), which are in the same
>    machine, together physically.
> b. Launch testpmd with P0 and a vhost-user port, and use the csum
>    forwarding engine with "retry".
> c. Select IP and TCP HW checksum calculation for P0; select TCP HW
>    checksum calculation for the vhost-user port.
> d. Launch a VM with csum and tso offloading enabled.
> e. Run iperf-client on the virtio-net port in the VM to send TCP
>    packets. With csum and tso enabled, the VM can send large TCP/IPv4
>    packets (mss is up to 64KB).
> f. P1 is assigned to the linux kernel with kernel GRO enabled. Run
>    iperf-server on P1.
>
> We conduct three iperf tests:
>
> test-1: enable GSO for P0 in testpmd, and set the max GSO segment
>         length to 1514B. Run four iperf-clients in the VM.
> test-2: enable TSO for P0 in testpmd, and set TSO segsz to 1514B. Run
>         four iperf-clients in the VM.
> test-3: disable GSO and TSO in testpmd. Run two iperf-clients in the
>         VM.
>
> Throughput of the above three tests:
>
> test-1: 9Gbps
> test-2: 9.5Gbps
> test-3: 3Mbps
>
> Functional Testing
> ==================
> Unlike TCP packets, VMs can't send large VxLAN or GRE packets: the max
> length of tunneled packets from VMs is 1514B. So the above method
> can't be used to measure VxLAN and GRE GSO performance; instead, we
> simply test the functionality by setting a small GSO segment length
> (e.g. 500B).
>
> VxLAN
> -----
> To test VxLAN GSO functionality, we use the following setup:
>
> a. Connect 2 x 10Gbps physical ports (P0, P1), which are in the same
>    machine, together physically.
> b. Launch testpmd with P0 and a vhost-user port, and use the csum
>    forwarding engine with "retry".
> c. Testpmd commands:
>    - csum parse_tunnel on "P0"
>    - csum parse_tunnel on "vhost-user port"
>    - csum set outer-ip hw "P0"
>    - csum set ip hw "P0"
>    - csum set tcp hw "P0"
>    - csum set tcp hw "vhost-user port"
>    - set port "P0" gso on
>    - set gso segsz 500
> d. Launch a VM with csum and tso offloading enabled.
> e. Create a vxlan port for the virtio-net port in the VM. Run
>    iperf-client on the VxLAN port, so TCP packets are VxLAN
>    encapsulated. However, the max packet length is 1514B.
> f. P1 is assigned to the linux kernel and kernel GRO is disabled.
>    Similarly, create a VxLAN port for P1, and run iperf-server on the
>    VxLAN port.
>
> In testpmd, we can see that the length of all packets sent from P0 is
> smaller than or equal to 500B. Additionally, the packets arriving at
> P1 are encapsulated and are smaller than or equal to 500B.
>
> GRE
> ---
> The same process may be used to test GRE functionality, with the
> exception that the tunnel type created for both the guest's
> virtio-net and the host's kernel interfaces is GRE:
> `ip tunnel add mode gre remote local `
>
> As in the VxLAN testcase, the length of packets sent from P0, and
> received on P1, is less than 500B.
>
> <...>
>
> Jiayu Hu (3):
>   gso: add Generic Segmentation Offload API framework
>   gso: add TCP/IPv4 GSO support
>   app/testpmd: enable TCP/IPv4, VxLAN and GRE GSO
>
> Mark Kavanagh (3):
>   gso: add VxLAN GSO support
>   gso: add GRE GSO support
>   doc: add GSO programmer's guide

Series applied to dpdk-next-net/master, thanks.