From: "Ananyev, Konstantin"
Subject: Re: [PATCH 0/5] Support TCP/IPv4, VxLAN and GRE GSO in DPDK
Date: Wed, 30 Aug 2017 10:49:15 +0000
Message-ID: <2601191342CEEE43887BDE71AB9772584F23E240@IRSMSX103.ger.corp.intel.com>
References: <1503584144-63181-1-git-send-email-jiayu.hu@intel.com>
 <2601191342CEEE43887BDE71AB9772584F23E07B@IRSMSX103.ger.corp.intel.com>
 <20170830073656.GA79301@dpdk15.sh.intel.com>
In-Reply-To: <20170830073656.GA79301@dpdk15.sh.intel.com>
To: "Hu, Jiayu"
Cc: "dev@dpdk.org", "Kavanagh, Mark B", "Tan, Jianfeng"

> -----Original Message-----
> From: Hu, Jiayu
> Sent: Wednesday, August 30, 2017 8:37 AM
> To: Ananyev, Konstantin
> Cc: dev@dpdk.org; Kavanagh, Mark B; Tan, Jianfeng
> Subject: Re: [PATCH 0/5] Support TCP/IPv4, VxLAN and GRE GSO in DPDK
>
> Hi Konstantin,
>
> Thanks for your suggestions. Feedback is inline.
>
> Thanks,
> Jiayu
>
> On Wed, Aug 30, 2017 at 09:37:42AM +0800, Ananyev, Konstantin wrote:
> >
> > Hi Jiayu,
> > A few questions/comments from me below and in the next few mails.
> > Thanks
> > Konstantin
> >
> > >
> > > Generic Segmentation Offload (GSO) is a SW technique to split large
> > > packets into small ones. Akin to TSO, GSO enables applications to
> > > operate on large packets, thus reducing per-packet processing
> > > overhead.
> > >
> > > To enable more flexibility to applications, DPDK GSO is implemented
> > > as a standalone library. Applications explicitly use the GSO library
> > > to segment packets. This patchset adds GSO support to DPDK for
> > > specific packet types: TCP/IPv4, VxLAN, and GRE.
> > >
> > > The first patch introduces the GSO API framework. The second patch
> > > adds GSO support for TCP/IPv4 packets (containing an optional VLAN
> > > tag). The third patch adds GSO support for VxLAN packets that contain
> > > outer IPv4 and inner TCP/IPv4 headers (plus optional inner and/or
> > > outer VLAN tags). The fourth patch adds GSO support for GRE packets
> > > that contain outer IPv4 and inner TCP/IPv4 headers (with an optional
> > > outer VLAN tag). The last patch in the series enables TCP/IPv4,
> > > VxLAN, and GRE GSO in testpmd's checksum forwarding engine.
> > >
> > > The performance of TCP/IPv4 GSO on a 10Gbps link is demonstrated
> > > using iperf. The test setup is as follows:
> > >
> > > a. Connect two 10Gbps physical ports (P0, P1) together physically.
> > > b. Launch testpmd with P0 and a vhost-user port, and use the csum
> > >    forwarding engine.
> > > c. Select IP and TCP HW checksum calculation for P0; select TCP HW
> > >    checksum calculation for the vhost-user port.
> > > d. Launch a VM with csum and tso offloading enabled.
> > > e. Run iperf-client on the virtio-net port in the VM to send TCP
> > >    packets.
> >
> > Not sure I understand the setup correctly:
> > So testpmd forwards packets between P0 and the vhost-user port, right?
>
> Yes.
>
> > And who uses P1? An iperf server over the Linux kernel?
>
> P1 is owned by the Linux kernel.
>
> > Also, is P1 on another box or not?
>
> P0 and P1 are in the same machine and are connected physically.
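
I see. And just so I read the testpmd side right, steps b-c would be
configured with something like the commands below? I am only guessing the
GSO command syntax from patch 5's description (and assuming P0 is port 0
and the vhost-user port is port 1), so correct me if the patch spells it
differently:

    testpmd> port stop all
    testpmd> csum set ip hw 0            (step c: IP HW checksum on P0)
    testpmd> csum set tcp hw 0           (step c: TCP HW checksum on P0)
    testpmd> csum set tcp hw 1           (step c: TCP HW checksum on vhost)
    testpmd> set port 0 gso on           (assumed syntax: enable GSO on P0)
    testpmd> set gso segsz 1518 port 0   (assumed syntax: segment size)
    testpmd> port start all
    testpmd> set fwd csum                (step b: csum forwarding engine)
    testpmd> start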
> > > With GSO enabled for P0 in testpmd, the observed iperf throughput
> > > is ~9Gbps.
> >
> > Ok, and if GSO is disabled, what is the throughput?
> > Another stupid question: if P0 is a physical 10G port (ixgbe?), we can
> > just enable TSO on it, right?
> > If so, what would be the TSO numbers here?
>
> Here is more detailed experiment information:
>
> test1: only enable GSO for P0, GSO size is 1518, use two iperf clients
>        (i.e. "-P 2")
> test2: only enable TSO for P0, TSO size is 1518, use two iperf clients
> test3: disable both TSO and GSO, use two iperf clients
>
> test1 throughput: 8.6Gbps
> test2 throughput: 9.5Gbps
> test3 throughput: 3Mbps

Ok, thanks for the detailed explanation.
I'd suggest you put it into the next version's cover letter.

>
> > In fact, could you probably explain a bit more what is supposed to be
> > the main usage model for that library?
>
> The GSO library is just a SW segmentation method which can be used by
> applications, like OVS. Currently, most NICs can segment TCP and UDP
> packets in HW, but not all of them can. So OVS currently doesn't enable
> TSO, for lack of a SW segmentation fallback. Besides, the protocol types
> supported by HW segmentation are limited. So it's necessary to provide
> a SW segmentation solution.
>
> With the GSO library, OVS and other applications are able to receive
> large packets from VMs and process these large packets, instead of
> standard ones (i.e. 1518B). So the per-packet overhead is reduced, since
> far fewer packets need to be processed.
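
To check that I've understood the intended usage model, the application
flow would be something like the sketch below, right? This is only my
reading of rte_gso.h from patch 1 - the context field names, the offload
flag and the helper names (direct_mp, indirect_mp, MAX_SEGS, gso_and_tx)
are my assumptions, so correct me where the actual API differs:

    #include <rte_ethdev.h>
    #include <rte_gso.h>
    #include <rte_mbuf.h>

    #define MAX_SEGS 64     /* app-chosen bound on output segments */

    static struct rte_gso_ctx gso_ctx;

    /* One-time init; direct_mp/indirect_mp are the app's own mempools. */
    static void
    gso_ctx_init(struct rte_mempool *direct_mp,
                 struct rte_mempool *indirect_mp)
    {
            gso_ctx.direct_pool   = direct_mp;    /* segment headers */
            gso_ctx.indirect_pool = indirect_mp;  /* payload references */
            gso_ctx.gso_types     = DEV_TX_OFFLOAD_TCP_TSO;
            gso_ctx.gso_size      = 1518;  /* max output segment length */
    }

    /* Segment one large packet from the vhost port, then transmit. */
    static void
    gso_and_tx(struct rte_mbuf *pkt, uint16_t port_id, uint16_t queue_id)
    {
            struct rte_mbuf *segs[MAX_SEGS];
            int nb_segs = rte_gso_segment(pkt, &gso_ctx, segs, MAX_SEGS);

            if (nb_segs < 0) {
                    rte_pktmbuf_free(pkt);  /* app's choice: drop on failure */
                    return;
            }
            rte_eth_tx_burst(port_id, queue_id, segs, (uint16_t)nb_segs);
    }

If that is roughly it, it would be worth spelling this flow out in the
programmer's guide for the library.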
Ok, just for my curiosity, what is the size of the packets coming from
the VM?
Konstantin

>
> > Is that to perform segmentation on (virtual) devices that don't
> > support HW TSO, or ...?
>
> When QEMU is launched with TSO or GSO enabled, the virtual device
> doesn't really do segmentation. It directly sends large packets.
> Therefore, testpmd can receive large packets from the VM and then
> perform GSO. The GSO/TSO behavior of virtual devices is different from
> that of physical NICs.
>
> > Again, would it be for a termination point (packets just formed and
> > filled by the caller), or is that for a box in the middle which just
> > forwards packets between nodes?
> > If the latter, then we'll probably already have most of our packets
> > segmented properly, no?
> >
> > > The experimental data for VxLAN and GRE will be shown later.
> > >
> > > Jiayu Hu (3):
> > >   lib: add Generic Segmentation Offload API framework
> > >   gso/lib: add TCP/IPv4 GSO support
> > >   app/testpmd: enable TCP/IPv4, VxLAN and GRE GSO
> > >
> > > Mark Kavanagh (2):
> > >   lib/gso: add VxLAN GSO support
> > >   lib/gso: add GRE GSO support
> > >
> > >  app/test-pmd/cmdline.c                  | 121 +++++++++
> > >  app/test-pmd/config.c                   |  25 ++
> > >  app/test-pmd/csumonly.c                 |  68 ++++-
> > >  app/test-pmd/testpmd.c                  |   9 +
> > >  app/test-pmd/testpmd.h                  |  10 +
> > >  config/common_base                      |   5 +
> > >  lib/Makefile                            |   2 +
> > >  lib/librte_eal/common/include/rte_log.h |   1 +
> > >  lib/librte_gso/Makefile                 |  52 ++++
> > >  lib/librte_gso/gso_common.c             | 431 ++++++++++++++++++++++++++++++++
> > >  lib/librte_gso/gso_common.h             | 180 +++++++++++++
> > >  lib/librte_gso/gso_tcp.c                |  82 ++++++
> > >  lib/librte_gso/gso_tcp.h                |  73 ++++++
> > >  lib/librte_gso/gso_tunnel.c             |  62 +++++
> > >  lib/librte_gso/gso_tunnel.h             |  46 ++++
> > >  lib/librte_gso/rte_gso.c                | 100 ++++++++
> > >  lib/librte_gso/rte_gso.h                | 122 +++++++++
> > >  lib/librte_gso/rte_gso_version.map      |   7 +
> > >  mk/rte.app.mk                           |   1 +
> > >  19 files changed, 1392 insertions(+), 5 deletions(-)
> > >  create mode 100644 lib/librte_gso/Makefile
> > >  create mode 100644 lib/librte_gso/gso_common.c
> > >  create mode 100644 lib/librte_gso/gso_common.h
> > >  create mode 100644 lib/librte_gso/gso_tcp.c
> > >  create mode 100644 lib/librte_gso/gso_tcp.h
> > >  create mode 100644 lib/librte_gso/gso_tunnel.c
> > >  create mode 100644 lib/librte_gso/gso_tunnel.h
> > >  create mode 100644 lib/librte_gso/rte_gso.c
> > >  create mode 100644 lib/librte_gso/rte_gso.h
> > >  create mode 100644 lib/librte_gso/rte_gso_version.map
> > >
> > > --
> > > 2.7.4