From: Jiayu Hu
Subject: [PATCH v4 0/3] Support TCP/IPv4 GRO in DPDK
Date: Wed, 7 Jun 2017 19:08:48 +0800
Message-ID: <1496833731-53653-1-git-send-email-jiayu.hu@intel.com>
In-Reply-To: <1493021398-115955-1-git-send-email-jiayu.hu@intel.com>
References: <1493021398-115955-1-git-send-email-jiayu.hu@intel.com>
To: dev@dpdk.org
Cc: konstantin.ananyev@intel.com, keith.wiles@intel.com, yliu@fridaylinux.org, jianfeng.tan@intel.com, tiwei.bie@intel.com, Jiayu Hu

Generic Receive Offload (GRO) is a widely used software-based offloading
technique to reduce per-packet processing overhead. It gains performance
by reassembling small packets into large ones. Therefore, we propose to
support GRO in DPDK.

To give applications more flexibility, DPDK GRO is implemented as a user
library. Applications explicitly use the GRO library to merge small
packets into large ones.

DPDK GRO provides two reassembly modes: lightweight mode and heavyweight
mode. Applications that want to merge packets in a simple way can use
lightweight mode; applications that need more fine-grained control can
choose heavyweight mode. (A usage sketch of lightweight mode follows the
change log below.)

This patchset adds TCP/IPv4 GRO support to DPDK. The first patch provides
the GRO API framework, the second patch implements TCP/IPv4 GRO, and the
last patch demonstrates how to use the GRO library in app/testpmd.

We run two iperf tests (with and without DPDK GRO) to measure the
performance gain from DPDK GRO. The experiment environment is:
a. Two 10Gbps physical ports (p0 and p1) on one host are linked together;
b. p0 is in networking namespace ns1, whose IP is 1.1.2.3. The iperf
   client runs on p0 and sends TCP/IPv4 packets;
c. testpmd runs on p1. In addition, testpmd has a vdev which connects to
   a VM via vhost-user and virtio-net. The VM runs the iperf server,
   whose IP is 1.1.2.4; the OS in the VM is Ubuntu 14.04;
d. p0 turns on TSO; the VM turns off kernel GRO; testpmd runs in iofwd
   mode.

The iperf client and server use the following commands:
- client: ip netns exec ns1 iperf -c 1.1.2.4 -i2 -t 60 -f g -m
- server: iperf -s -f g

The two test cases are:
a. w/o DPDK GRO: run testpmd without GRO
b. w/  DPDK GRO: testpmd enables GRO for p1

Result: with GRO, the throughput improvement is around 40%.

Change log
==========
v4:
- implement DPDK GRO as a library used directly by applications
- introduce lightweight and heavyweight working modes to give
  applications fine-grained control
- replace cuckoo hash tables with a simpler table structure
v3:
- fix compilation issues.
v2:
- provide generic reassembly function;
- implement GRO as a device ability:
  add APIs for devices to support GRO;
  add APIs for applications to enable/disable GRO;
- update testpmd example.
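Example usage (lightweight mode)
================================
The following is only a minimal sketch of how an application might invoke
lightweight mode from its RX path. It assumes an API shaped like the
upstream rte_gro library (rte_gro_reassemble_burst(), struct rte_gro_param,
RTE_GRO_TCP_IPV4); the exact names, fields and signatures in this patchset
may differ.

    #include <rte_ethdev.h>
    #include <rte_gro.h>
    #include <rte_mbuf.h>

    #define MAX_PKT_BURST 32

    static void
    rx_gro_and_forward(uint16_t port_id, uint16_t queue_id)
    {
            struct rte_mbuf *pkts[MAX_PKT_BURST];
            uint16_t nb_rx, nb_after_gro;

            /* Assumed parameter block: which GRO types to perform and how
             * large the reassembly tables may grow. */
            struct rte_gro_param param = {
                    .gro_types = RTE_GRO_TCP_IPV4,
                    .max_flow_num = 64,
                    .max_item_per_flow = 32,
            };

            nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, MAX_PKT_BURST);
            if (nb_rx == 0)
                    return;

            /* Lightweight mode: merge what can be merged within this one
             * burst and return the number of packets left afterwards. */
            nb_after_gro = rte_gro_reassemble_burst(pkts, nb_rx, &param);

            rte_eth_tx_burst(port_id, queue_id, pkts, nb_after_gro);
    }

Heavyweight mode would instead keep reassembly state across bursts and
flush merged packets under application control.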
Jiayu Hu (3):
  lib: add Generic Receive Offload API framework
  lib/gro: add TCP/IPv4 GRO support
  app/testpmd: enable TCP/IPv4 GRO

 app/test-pmd/cmdline.c       |  45 ++++
 app/test-pmd/config.c        |  29 +++
 app/test-pmd/iofwd.c         |   6 +
 app/test-pmd/testpmd.c       |   3 +
 app/test-pmd/testpmd.h       |  11 +
 config/common_base           |   5 +
 lib/Makefile                 |   1 +
 lib/librte_gro/Makefile      |  51 +++++
 lib/librte_gro/rte_gro.c     | 243 +++++++++++++++++++++
 lib/librte_gro/rte_gro.h     | 216 ++++++++++++++++++
 lib/librte_gro/rte_gro_tcp.c | 509 +++++++++++++++++++++++++++++++++++++++++++
 lib/librte_gro/rte_gro_tcp.h | 206 +++++++++++++++++
 mk/rte.app.mk                |   1 +
 13 files changed, 1326 insertions(+)
 create mode 100644 lib/librte_gro/Makefile
 create mode 100644 lib/librte_gro/rte_gro.c
 create mode 100644 lib/librte_gro/rte_gro.h
 create mode 100644 lib/librte_gro/rte_gro_tcp.c
 create mode 100644 lib/librte_gro/rte_gro_tcp.h

--
2.7.4