From: Matteo Croce
To: netdev@vger.kernel.org, linux-mm@kvack.org
Cc: Ayush Sawal, Vinay Kumar Yadav, Rohit Maheshwari,
Miller" , Jakub Kicinski , Thomas Petazzoni , Marcin Wojtas , Russell King , Mirko Lindner , Stephen Hemminger , Tariq Toukan , Jesper Dangaard Brouer , Ilias Apalodimas , Alexei Starovoitov , Daniel Borkmann , John Fastabend , Boris Pismenny , Arnd Bergmann , Andrew Morton , "Peter Zijlstra (Intel)" , Vlastimil Babka , Yu Zhao , Will Deacon , Fenghua Yu , Roman Gushchin , Hugh Dickins , Peter Xu , Jason Gunthorpe , Jonathan Lemon , Alexander Lobakin , Cong Wang , wenxu , Kevin Hao , Jakub Sitnicki , Marco Elver , Willem de Bruijn , Miaohe Lin , Yunsheng Lin , Guillaume Nault , linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, bpf@vger.kernel.org, Matthew Wilcox , Eric Dumazet , David Ahern , Lorenzo Bianconi , Saeed Mahameed , Andrew Lunn , Paolo Abeni , Sven Auhagen Subject: [PATCH net-next v5 4/5] mvpp2: recycle buffers Date: Thu, 13 May 2021 18:58:45 +0200 Message-Id: <20210513165846.23722-5-mcroce@linux.microsoft.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210513165846.23722-1-mcroce@linux.microsoft.com> References: <20210513165846.23722-1-mcroce@linux.microsoft.com> MIME-Version: 1.0 Authentication-Results: imf05.hostedemail.com; dkim=none; dmarc=fail reason="SPF not aligned (relaxed), No valid DKIM" header.from=linux.microsoft.com (policy=none); spf=pass (imf05.hostedemail.com: domain of technoboy85@gmail.com designates 209.85.208.41 as permitted sender) smtp.mailfrom=technoboy85@gmail.com X-Rspamd-Server: rspam01 X-Rspamd-Queue-Id: BAE35E002025 X-Stat-Signature: ipcjh9756p5twar8f8uyj4m8gpfp4nos X-HE-Tag: 1620925207-793442 Content-Transfer-Encoding: quoted-printable X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Matteo Croce Use the new recycling API for page_pool. In a drop rate test, the packet rate is more than doubled, from 962 Kpps to 2047 Kpps. perf top on a stock system shows: Overhead Shared Object Symbol 30.67% [kernel] [k] page_pool_release_page 8.37% [kernel] [k] get_page_from_freelist 7.34% [kernel] [k] free_unref_page 6.47% [mvpp2] [k] mvpp2_rx 4.69% [kernel] [k] eth_type_trans 4.55% [kernel] [k] __netif_receive_skb_core 4.40% [kernel] [k] build_skb 4.29% [kernel] [k] kmem_cache_free 4.00% [kernel] [k] kmem_cache_alloc 3.81% [kernel] [k] dev_gro_receive With packet rate stable at 962 Kpps: tx: 0 bps 0 pps rx: 477.4 Mbps 962.6 Kpps tx: 0 bps 0 pps rx: 477.6 Mbps 962.8 Kpps tx: 0 bps 0 pps rx: 477.6 Mbps 962.9 Kpps tx: 0 bps 0 pps rx: 477.2 Mbps 962.1 Kpps tx: 0 bps 0 pps rx: 477.5 Mbps 962.7 Kpps And this is the same output with recycling enabled: Overhead Shared Object Symbol 12.75% [mvpp2] [k] mvpp2_rx 9.56% [kernel] [k] __netif_receive_skb_core 9.29% [kernel] [k] build_skb 9.27% [kernel] [k] eth_type_trans 8.39% [kernel] [k] kmem_cache_alloc 7.85% [kernel] [k] kmem_cache_free 7.36% [kernel] [k] page_pool_put_page 6.45% [kernel] [k] dev_gro_receive 4.72% [kernel] [k] __xdp_return 3.06% [kernel] [k] page_pool_refill_alloc_cache With packet rate above 2000 Kpps: tx: 0 bps 0 pps rx: 1015 Mbps 2046 Kpps tx: 0 bps 0 pps rx: 1015 Mbps 2047 Kpps tx: 0 bps 0 pps rx: 1015 Mbps 2047 Kpps tx: 0 bps 0 pps rx: 1015 Mbps 2047 Kpps tx: 0 bps 0 pps rx: 1015 Mbps 2047 Kpps The major performance increase is explained by the fact that the most CPU consuming functions (page_pool_release_page, get_page_from_freelist and free_unref_page) are no longer called on a per packet basis. 
The test was done by sending 64 byte Ethernet frames with an invalid
ethertype to the Macchiatobin, so the packets are dropped early in the
RX path.

Signed-off-by: Matteo Croce
---
 drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
index b2259bf1d299..9dceabece56c 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
@@ -3847,6 +3847,7 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
 	struct mvpp2_pcpu_stats ps = {};
 	enum dma_data_direction dma_dir;
 	struct bpf_prog *xdp_prog;
+	struct xdp_rxq_info *rxqi;
 	struct xdp_buff xdp;
 	int rx_received;
 	int rx_done = 0;
@@ -3912,15 +3913,15 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
 		else
 			frag_size = bm_pool->frag_size;
 
-		if (xdp_prog) {
-			struct xdp_rxq_info *xdp_rxq;
+		if (bm_pool->pkt_size == MVPP2_BM_SHORT_PKT_SIZE)
+			rxqi = &rxq->xdp_rxq_short;
+		else
+			rxqi = &rxq->xdp_rxq_long;
 
-			if (bm_pool->pkt_size == MVPP2_BM_SHORT_PKT_SIZE)
-				xdp_rxq = &rxq->xdp_rxq_short;
-			else
-				xdp_rxq = &rxq->xdp_rxq_long;
+		if (xdp_prog) {
+			xdp.rxq = rxqi;
 
-			xdp_init_buff(&xdp, PAGE_SIZE, xdp_rxq);
+			xdp_init_buff(&xdp, PAGE_SIZE, rxqi);
 			xdp_prepare_buff(&xdp, data,
 					 MVPP2_MH_SIZE + MVPP2_SKB_HEADROOM,
 					 rx_bytes, false);
@@ -3964,7 +3965,7 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
 		}
 
 		if (pp)
-			page_pool_release_page(pp, virt_to_page(data));
+			skb_mark_for_recycle(skb, virt_to_page(data), pp);
 		else
 			dma_unmap_single_attrs(dev->dev.parent, dma_addr,
 					       bm_pool->buf_size, DMA_FROM_DEVICE,
-- 
2.31.1