From: Ilias Apalodimas <ilias.apalodimas@linaro.org>
To: Yunsheng Lin <linyunsheng@huawei.com>
Cc: Matteo Croce <mcroce@linux.microsoft.com>,
	netdev@vger.kernel.org, linux-mm@kvack.org,
	Ayush Sawal <ayush.sawal@chelsio.com>,
	Vinay Kumar Yadav <vinay.yadav@chelsio.com>,
	Rohit Maheshwari <rohitm@chelsio.com>,
	"David S. Miller" <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>,
	Thomas Petazzoni <thomas.petazzoni@bootlin.com>,
	Marcin Wojtas <mw@semihalf.com>,
	Russell King <linux@armlinux.org.uk>,
	Mirko Lindner <mlindner@marvell.com>,
	Stephen Hemminger <stephen@networkplumber.org>,
	Tariq Toukan <tariqt@nvidia.com>,
	Jesper Dangaard Brouer <hawk@kernel.org>,
	Alexei Starovoitov <ast@kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>,
	John Fastabend <john.fastabend@gmail.com>,
	Boris Pismenny <borisp@nvidia.com>, Arnd Bergmann <arnd@arndb.de>,
	Andrew Morton <akpm@linux-foundation.org>,
	"Peter Zijlstra (Intel)" <peterz@infradead.org>,
	Vlastimil Babka <vbabka@suse.cz>, Yu Zhao <yuzhao@google.com>,
	Will Deacon <will@kernel.org>,
	Michel Lespinasse <walken@google.com>,
	Fenghua Yu <fenghua.yu@intel.com>, Roman Gushchin <guro@fb.com>,
	Hugh Dickins <hughd@google.com>, Peter Xu <peterx@redhat.com>,
	Jason Gunthorpe <jgg@ziepe.ca>,
	Guoqing Jiang <guoqing.jiang@cloud.ionos.com>,
	Jonathan Lemon <jonathan.lemon@gmail.com>,
	Alexander Lobakin <alobakin@pm.me>,
	Cong Wang <cong.wang@bytedance.com>, wenxu <wenxu@ucloud.cn>,
	Kevin Hao <haokexin@gmail.com>,
	Aleksandr Nogikh <nogikh@google.com>,
	Jakub Sitnicki <jakub@cloudflare.com>,
	Marco Elver <elver@google.com>,
	Willem de Bruijn <willemb@google.com>,
	Miaohe Lin <linmiaohe@huawei.com>,
	Guillaume Nault <gnault@redhat.com>,
	linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org,
	bpf@vger.kernel.org, Matthew Wilcox <willy@infradead.org>,
	Eric Dumazet <edumazet@google.com>,
	David Ahern <dsahern@gmail.com>,
	Lorenzo Bianconi <lorenzo@kernel.org>,
	Saeed Mahameed <saeedm@nvidia.com>, Andrew Lunn <andrew@lunn.ch>,
	Paolo Abeni <pabeni@redhat.com>
Subject: Re: [PATCH net-next v3 0/5] page_pool: recycle buffers
Date: Thu, 29 Apr 2021 21:51:15 +0300
Message-ID: <YIsAIzecktXXBlxn@apalos.home>
In-Reply-To: <e873c16e-8f49-6e70-1f56-21a69e2e37ce@huawei.com>

Hi Yunsheng,

On Thu, Apr 29, 2021 at 04:27:21PM +0800, Yunsheng Lin wrote:
> On 2021/4/10 6:37, Matteo Croce wrote:
> > From: Matteo Croce <mcroce@microsoft.com>
> > 
> > This is a respin of [1]
> > 
> > This patchset shows the plans for allowing page_pool to handle and
> > maintain DMA map/unmap of the pages it serves to the driver.  For this
> > to work, a return hook in the network core is introduced.
> > 
> > The overall purpose is to simplify drivers, by providing a page
> > allocation API that does recycling, such that each driver doesn't have
> > to reinvent its own recycling scheme.  Using page_pool in a driver
> > does not require implementing XDP support, but it makes it trivially
> > easy to do so.  Instead of allocating buffers specifically for SKBs,
> > we now allocate a generic buffer and either wrap it in an SKB
> > (via build_skb) or create an XDP frame.
> > The recycling code leverages the XDP recycle APIs.
> > 
> > The Marvell mvpp2 and mvneta drivers are used in this patchset to
> > demonstrate how to use the API, tested on MacchiatoBIN
> > and EspressoBIN boards respectively.
> > 
> 
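
To illustrate the driver-facing pattern the cover letter describes: the
Rx path with this series roughly boils down to the sketch below.  This is
simplified from the mvneta patch; the exact name and signature of the
recycling hint helper are defined in patch 3/5 and may differ.

	/* simplified page_pool-backed Rx path; helper names may differ */
	struct page *page = page_pool_dev_alloc_pages(pool);
	/* ... DMA sync, receive the packet into the page ... */
	skb = build_skb(page_address(page), PAGE_SIZE);
	/* patch 3/5: hint that the payload can go back to its pool */
	skb_mark_for_recycle(skb, page, &rxq->xdp_rxq.mem);
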
> Hi, Matteo
>      I added skb frag page recycling to hns3 based on this patchset,
> and it gives a 10%~20% performance improvement for a single-thread iperf
> TCP flow (IOMMU off; there may be a further improvement from avoiding
> DMA map/unmap when the IOMMU is on). Thanks for the work.
> 
>     The skb frag page recycling support in the hns3 driver is not as
> simple as in the mvpp2 and mvneta drivers, because:
> 
> 1. the hns3 driver does not have XDP support yet, so "struct xdp_rxq_info"
>    is added to establish the binding between a "struct page" and its
>    "struct page_pool".
> 
> 2. the hns3 driver already has page reuse based on page splitting and
>    the page reference count, but it may not work if the upper stack
>    cannot process the skb and release the corresponding page fast enough.
> 
> 3. the hns3 driver supports batching of page reference count updates, see:
>    aeda9bf87a45 ("net: hns3: batch the page reference count updates")
> 
> So it would be better if:
> 
> 1. skb frag page recycling did not need "struct xdp_rxq_info" or
>    "struct xdp_mem_info" to bind a "struct page" to its
>    "struct page_pool"; they seem unnecessary at this point if embedding
>    a "struct page_pool" pointer directly in "struct page" does not
>    increase its size.

We can't do that.  The reason we need those structs is that we rely on the
existing XDP code, which already recycles its buffers, to enable
recycling.  Since we allocate a page per packet when using page_pool for a
driver, the same ideas apply to an SKB and an XDP frame.  We just recycle
the payload and don't really care what's in it.  We could rename the
functions to something more generic in the future, though.
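
To make that concrete, the return hook this series adds does roughly the
following.  This is an illustrative sketch only: the signature test comes
from patch 2/5, and the field and lookup helper names below are
placeholders rather than the real code.

	/* sketch of the skb return hook; placeholder names throughout,
	 * see patches 2/5 and 3/5 for the actual code
	 */
	static bool skb_page_recycle_sketch(struct page *page)
	{
		struct page_pool *pp;

		/* patch 2/5 adds a signature so the generic free path
		 * can tell page_pool pages apart from regular pages
		 */
		if (!page_has_pp_signature(page))	/* placeholder */
			return false;	/* fall back to put_page() */

		/* the xdp_mem_info bound to the page is what lets the
		 * generic code find the owning pool - the binding
		 * discussed above
		 */
		pp = pool_from_mem_info(page);		/* placeholder */
		page_pool_put_full_page(pp, page, false);
		return true;
	}
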

> 
> 2. it would be good to do the batching of page reference count updates
>    in the page pool instead of in a specific driver.
> 
> 
> page_pool_atomic_sub_if_positive() is added to decide who can call
> page_pool_put_full_page(): the driver and the stack may hold references
> to the same page, and only the last holder of a complete reference to a
> page can call page_pool_put_full_page() to decide whether recycling is
> possible; if not, the page is released.  So I am wondering if a similar
> page_pool_atomic_sub_if_positive() could be added to the corresponding
> user space address unmapping path, to allow skb recycling for RX
> zerocopy too?
> 

I would prefer a different page pool type if we wanted to support the split
page model.  The changes as-is are quite intrusive, since they change the
entire skb return path, so I would prefer introducing the changes one at a
time.

The fundamental difference between having the recycling in the driver vs
having it in a generic API is pretty straightforward.  When a driver holds
the extra page references, it is free to decide what to reuse when it is
about to refill its Rx descriptors, so TCP zerocopy might work even if the
userspace application holds the buffers for an arbitrary amount of time.
With this proposal, though, we *need* to decide what to do with the buffer
when we are about to free the skb.

[...]


Cheers
/Ilias

Thread overview: 41+ messages
2021-04-09 22:37 [PATCH net-next v3 0/5] page_pool: recycle buffers Matteo Croce
2021-04-09 22:37 ` [PATCH net-next v3 1/5] xdp: reduce size of struct xdp_mem_info Matteo Croce
2021-04-09 22:37 ` [PATCH net-next v3 2/5] mm: add a signature in struct page Matteo Croce
2021-04-10 15:48   ` Matthew Wilcox
2021-04-10 16:16     ` Ilias Apalodimas
2021-04-10 17:42       ` Shakeel Butt
2021-04-10 18:27         ` Ilias Apalodimas
2021-04-10 19:39           ` Matthew Wilcox
2021-04-11 10:05             ` Jesper Dangaard Brouer
2021-04-14 19:41           ` Jesper Dangaard Brouer
2021-04-14 20:09             ` Shakeel Butt
2021-04-14 20:51               ` Eric Dumazet
2021-04-19  5:12               ` Ilias Apalodimas
2021-04-19 14:57                 ` Shakeel Butt
2021-04-19 15:43                   ` Ilias Apalodimas
2021-04-19 16:21                     ` Shakeel Butt
2021-04-19 18:41                       ` Ilias Apalodimas
2021-04-19 11:22               ` Jesper Dangaard Brouer
2021-04-19 13:01                 ` Matthew Wilcox
2021-04-20  8:10                   ` Ilias Apalodimas
2021-04-09 22:37 ` [PATCH net-next v3 3/5] page_pool: Allow drivers to hint on SKB recycling Matteo Croce
2021-04-10  0:11   ` Ilias Apalodimas
2021-04-10  0:39     ` Matteo Croce
2021-04-09 22:38 ` [PATCH net-next v3 4/5] mvpp2: recycle buffers Matteo Croce
2021-04-09 22:38 ` [PATCH net-next v3 5/5] mvneta: " Matteo Croce
2021-04-29  8:27 ` [PATCH net-next v3 0/5] page_pool: " Yunsheng Lin
2021-04-29 18:51   ` Ilias Apalodimas [this message]
2021-04-30  3:01     ` Yunsheng Lin
2021-04-30 16:24       ` Ilias Apalodimas
2021-04-30 17:32         ` Ilias Apalodimas
2021-05-03  7:29           ` Jesper Dangaard Brouer
2021-05-06 12:34         ` Yunsheng Lin
2021-05-06 12:58           ` Ilias Apalodimas
2021-05-07  3:23             ` Yunsheng Lin
2021-05-07  7:06               ` Ilias Apalodimas
2021-05-07  8:28                 ` Yunsheng Lin
2021-05-07 10:19                   ` Jesper Dangaard Brouer
2021-05-07 11:31                     ` Christoph Hellwig
2021-05-09  5:11                     ` Shay Agroskin
2021-05-11  8:41                       ` Ilias Apalodimas
2021-05-10  2:20                     ` Yunsheng Lin
