From: Feifei Wang
To:
Cc: dev@dpdk.org, nd@arm.com, Feifei Wang
Subject: [RFC PATCH v1 0/4] Direct re-arming of buffers on receive side
Date: Sat, 25 Dec 2021 00:46:08 +0800
Message-Id: <20211224164613.32569-1-feifei.wang2@arm.com>

Currently, the transmit side frees the buffers into the lcore cache and
the receive side allocates buffers from the lcore cache. The transmit
side typically frees 32 buffers, resulting in 32*8=256B of stores to
the lcore cache. The receive side allocates 32 buffers and stores them
in the receive side software ring, resulting in 32*8=256B of stores and
256B of loads from the lcore cache.

This patch proposes a mechanism to avoid freeing to / allocating from
the lcore cache: the receive side frees the buffers from the transmit
side directly into its software ring. This avoids the 256B of loads and
stores introduced by the lcore cache. It also frees up the cache lines
used by the lcore cache.

However, this solution poses several constraints:

1) The receive queue needs to know which transmit queue it should take
the buffers from. The application logic decides which transmit port to
use to send out the packets. In many use cases the NIC might have a
single port ([1], [2], [3]), in which case a given transmit queue is
always mapped to a single receive queue (1:1 Rx queue : Tx queue). This
is easy to configure. If the NIC has 2 ports (there are several
references), then we will have a 1:2 (Rx queue : Tx queue) mapping,
which is still easy to configure. However, if this is generalized to
'N' ports, the configuration becomes lengthy. Moreover, the PMD would
have to scan a list of transmit queues to pull the buffers from.

2) The other factor that needs to be considered is the
'run-to-completion' vs 'pipeline' model. In the run-to-completion
model, the receive side and the transmit side run on the same lcore
serially. In the pipeline model, the receive side and the transmit side
might run on different lcores in parallel. This requires locking, which
is not supported at this point.

3) Tx and Rx buffers must be from the same mempool. We must also ensure
that the number of Tx buffers freed equals the number of Rx buffers
re-armed, i.e. txq->tx_rs_thresh == RTE_I40E_RXQ_REARM_THRESH. This
allows 'tx_next_dd' to be updated correctly in direct-rearm mode,
because tx_next_dd is the variable used to compute the free location in
the Tx sw-ring; its value ends up one round ahead of the position where
the next free starts.
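To make the buffer flow concrete, below is a minimal conceptual sketch
of one direct re-arm step, assuming simplified queue structures. The
field names (sw_ring, tx_next_dd, tx_rs_thresh, rxrearm_start) follow
the names used above, but the types and the helper itself are
illustrative only and are not the actual i40e PMD code; descriptor
refill and ring wrap-around handling are omitted.

#include <stdint.h>
#include <rte_mbuf.h>

/* Simplified, illustrative queue views; the real i40e structures differ. */
struct sketch_tx_queue {
	struct rte_mbuf **sw_ring;  /* Tx software ring of mbuf pointers */
	uint16_t tx_next_dd;        /* last descriptor of the batch to free next */
	uint16_t tx_rs_thresh;      /* number of buffers freed per batch */
};

struct sketch_rx_queue {
	struct rte_mbuf **sw_ring;  /* Rx software ring of mbuf pointers */
	uint16_t rxrearm_start;     /* first Rx slot to re-arm */
};

/*
 * One direct re-arm step: instead of the Tx side freeing mbufs back to
 * the mempool (lcore cache) and the Rx side allocating them again, the
 * mbuf pointers are copied straight from the Tx sw-ring into the Rx
 * sw-ring. Assumes n == txq->tx_rs_thresh (== the Rx rearm threshold)
 * and that both queues use the same mempool.
 */
static inline void
sketch_direct_rearm(struct sketch_rx_queue *rxq,
		    struct sketch_tx_queue *txq, uint16_t n)
{
	uint16_t tx_first = (uint16_t)(txq->tx_next_dd - (txq->tx_rs_thresh - 1));
	uint16_t i;

	for (i = 0; i < n; i++)
		rxq->sw_ring[rxq->rxrearm_start + i] = txq->sw_ring[tx_first + i];

	/* Advance both rings past the batch that was just handed over. */
	txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
	rxq->rxrearm_start = (uint16_t)(rxq->rxrearm_start + n);
}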
Current status in this RFC:
1) An API is added to allow mapping a Tx queue to an Rx queue.
Currently it supports 1:1 mapping.
2) The i40e driver is changed to do the direct re-arm of the receive
side.
3) The l3fwd application is hacked to do the mapping for the following
command (one core, two flows case; an illustrative sketch of the
mapping call is appended after the diffstat):

$ ./examples/dpdk-l3fwd -n 4 -l 1 -a 0001:01:00.0 -a 0001:01:00.1 -- -p 0x3 -P --config='(0,0,1),(1,0,1)'

where:
Port 0 Rx queue 0 is mapped to Port 1 Tx queue 0
Port 1 Rx queue 0 is mapped to Port 0 Tx queue 0

Testing status:
1) Tested l3fwd with the above command. The results are as follows:

-------------------------------------------------------------------
               Base performance      With direct re-arm mode enabled
               (with this patch)
N1SDP:         0%                    +14.1%
Ampere Altra:  0%                    +17.1%
-------------------------------------------------------------------

This patch does not affect the performance of normal mode; with
direct-rearm mode enabled, performance improves by 14% - 17% on N1SDP
and Ampere Altra.

Feedback requested:
1) Has anyone done any similar experiments? Any lessons learnt?
2) Feedback on the API.

Next steps:
1) Update the code to support 1:N (Rx : Tx) mapping.
2) Automate the configuration in the l3fwd sample application.

Reference:
[1] https://store.nvidia.com/en-us/networking/store/product/MCX623105AN-CDAT/NVIDIAMCX623105ANCDATConnectX6DxENAdapterCard100GbECryptoDisabled/
[2] https://www.intel.com/content/www/us/en/products/sku/192561/intel-ethernet-network-adapter-e810cqda1/specifications.html
[3] https://www.broadcom.com/products/ethernet-connectivity/network-adapters/100gb-nic-ocp/n1100g

Feifei Wang (4):
  net/i40e: enable direct re-arm mode
  ethdev: add API for direct re-arm mode
  net/i40e: add direct re-arm mode internal API
  examples/l3fwd: give an example for direct rearm mode

 drivers/net/i40e/i40e_ethdev.c        |  34 ++++++
 drivers/net/i40e/i40e_rxtx.h          |   4 +
 drivers/net/i40e/i40e_rxtx_vec_neon.c | 149 +++++++++++++++++++++++++-
 examples/l3fwd/main.c                 |   3 +
 lib/ethdev/ethdev_driver.h            |  15 +++
 lib/ethdev/rte_ethdev.c               |  14 +++
 lib/ethdev/rte_ethdev.h               |  31 ++++++
 lib/ethdev/version.map                |   3 +
 8 files changed, 251 insertions(+), 2 deletions(-)

-- 
2.25.1
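As referenced in the current-status section above, here is a minimal
usage sketch of how an application such as l3fwd might configure the
1:1 mapping. The function rte_eth_direct_rearm_map() is a hypothetical
name standing in for the API added by the ethdev patch in this series;
its real name and signature may differ.

#include <stdint.h>

/*
 * Hypothetical prototype standing in for the mapping API added by the
 * "ethdev: add API for direct re-arm mode" patch in this series; the
 * real name and signature may differ.
 */
int rte_eth_direct_rearm_map(uint16_t rx_port_id, uint16_t rx_queue_id,
			     uint16_t tx_port_id, uint16_t tx_queue_id);

/* Set up the 1:1 mapping used in the l3fwd example above. */
static int
setup_direct_rearm_map(void)
{
	int ret;

	/* Port 0 Rx queue 0 takes buffers from Port 1 Tx queue 0. */
	ret = rte_eth_direct_rearm_map(0, 0, 1, 0);
	if (ret != 0)
		return ret;

	/* Port 1 Rx queue 0 takes buffers from Port 0 Tx queue 0. */
	return rte_eth_direct_rearm_map(1, 0, 0, 0);
}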