Subject: Re: [bpf-next V3 PATCH 13/15] mlx5: use page_pool for xdp_return_frame call
From: Tariq Toukan
To: Jesper Dangaard Brouer, netdev@vger.kernel.org, Björn Töpel, magnus.karlsson@intel.com
Cc: eugenia@mellanox.com, Jason Wang, John Fastabend, Eran Ben Elisha, Saeed Mahameed, galp@mellanox.com, Daniel Borkmann, Alexei Starovoitov
Date: Mon, 12 Mar 2018 12:08:24 +0200
Message-ID: <247192bd-761e-426c-c462-59efe3b7ca97@mellanox.com>
In-Reply-To: <152062896782.27458.5542026179434739900.stgit@firesoul>
References: <152062887576.27458.8590966896888512270.stgit@firesoul> <152062896782.27458.5542026179434739900.stgit@firesoul>

On 09/03/2018 10:56 PM, Jesper Dangaard Brouer wrote:
> This patch shows how it is possible to have both the driver-local page
> cache, which uses an elevated refcnt for "catching"/avoiding SKB
> put_page, and at the same time have pages returned to the page_pool
> from the ndo_xdp_xmit DMA completion.
>
> Performance is surprisingly good. Tested DMA-TX completion on ixgbe,
> which calls "xdp_return_frame", which calls page_pool_put_page().
> Stats show DMA-TX completion runs on CPU#9 and mlx5 RX runs on CPU#5.
> (Internally page_pool uses a ptr_ring, which is what gives the good
> cross-CPU performance.)
>
> Show adapter(s) (ixgbe2 mlx5p2) statistics (ONLY that changed!)
>  Ethtool(ixgbe2 ) stat: 732863573 ( 732,863,573) <= tx_bytes /sec
>  Ethtool(ixgbe2 ) stat: 781724427 ( 781,724,427) <= tx_bytes_nic /sec
>  Ethtool(ixgbe2 ) stat: 12214393 ( 12,214,393) <= tx_packets /sec
>  Ethtool(ixgbe2 ) stat: 12214435 ( 12,214,435) <= tx_pkts_nic /sec
>  Ethtool(mlx5p2 ) stat: 12211786 ( 12,211,786) <= rx3_cache_empty /sec
>  Ethtool(mlx5p2 ) stat: 36506736 ( 36,506,736) <= rx_64_bytes_phy /sec
>  Ethtool(mlx5p2 ) stat: 2336430575 ( 2,336,430,575) <= rx_bytes_phy /sec
>  Ethtool(mlx5p2 ) stat: 12211786 ( 12,211,786) <= rx_cache_empty /sec
>  Ethtool(mlx5p2 ) stat: 22823073 ( 22,823,073) <= rx_discards_phy /sec
>  Ethtool(mlx5p2 ) stat: 1471860 ( 1,471,860) <= rx_out_of_buffer /sec
>  Ethtool(mlx5p2 ) stat: 36506715 ( 36,506,715) <= rx_packets_phy /sec
>  Ethtool(mlx5p2 ) stat: 2336542282 ( 2,336,542,282) <= rx_prio0_bytes /sec
>  Ethtool(mlx5p2 ) stat: 13683921 ( 13,683,921) <= rx_prio0_packets /sec
>  Ethtool(mlx5p2 ) stat: 821015537 ( 821,015,537) <= rx_vport_unicast_bytes /sec
>  Ethtool(mlx5p2 ) stat: 13683608 ( 13,683,608) <= rx_vport_unicast_packets /sec
>
> Before this patch: single-flow performance was 6 Mpps, and if I started
> two flows the collective performance dropped to 4 Mpps, because we hit
> the page allocator lock (further negative scaling occurs).
>
> V2: Adjustments requested by Tariq
>  - Changed page_pool_create() to return only ERR_PTR on failure, never
>    NULL, as this simplifies error handling in drivers.
>  - Save a branch in mlx5e_page_release
>  - Correct page_pool size calc for MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ
>
> Signed-off-by: Jesper Dangaard Brouer
> ---
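
The ERR_PTR-only convention for page_pool_create() does make the driver
side nicer. Just to confirm I read the intended usage correctly, the
error path I have in mind looks roughly like this; only a sketch, not
quoting your patch, and pool_size, rq, c and the err_free label are
placeholders for whatever the surrounding function provides:

	struct page_pool_params pp_params = {
		.order		= 0,
		.flags		= 0,
		.pool_size	= pool_size,
		.nid		= cpu_to_node(c->cpu),
		.dev		= rq->pdev,
		.dma_dir	= rq->buff.map_dir,
	};

	/* page_pool_create() no longer returns NULL, only a valid pool
	 * or an ERR_PTR, so a single IS_ERR() check covers the error path.
	 */
	rq->page_pool = page_pool_create(&pp_params);
	if (IS_ERR(rq->page_pool)) {
		err = PTR_ERR(rq->page_pool);
		rq->page_pool = NULL;
		goto err_free;	/* placeholder error label */
	}
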
I am running perf tests with your series. I am seeing what looks like a
drastic degradation in regular TCP flows; I am double-checking the
numbers now...
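
Unrelated to the perf question, and mostly to confirm my understanding
of the return path: as I read it, xdp_return_frame() dispatches on the
registered memory type, and only MEM_TYPE_PAGE_POOL ends up in
page_pool_put_page(), which is where the ptr_ring gives the cross-CPU
recycling you measured. Roughly like below; again just a sketch, and
page_pool_lookup_by_id() is a made-up stand-in for the mem-id lookup
done in net/core/xdp.c:

	/* Sketch of the dispatch, not the actual implementation. */
	void sketch_xdp_return(void *data, struct xdp_mem_info *mem)
	{
		struct page *page = virt_to_head_page(data);

		switch (mem->type) {
		case MEM_TYPE_PAGE_POOL:
			/* Hand the page back to the pool that allocated it;
			 * internally this goes through a ptr_ring, so the
			 * TX-completion CPU can recycle RX-CPU pages.
			 */
			page_pool_put_page(page_pool_lookup_by_id(mem->id),
					   page);
			break;
		case MEM_TYPE_PAGE_SHARED:
			page_frag_free(data);
			break;
		case MEM_TYPE_PAGE_ORDER0:
			put_page(page);
			break;
		}
	}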