From: Yunsheng Lin <linyunsheng@huawei.com>
To: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Cc: Matteo Croce <mcroce@linux.microsoft.com>,
	<netdev@vger.kernel.org>, <linux-mm@kvack.org>,
	Ayush Sawal <ayush.sawal@chelsio.com>,
	"Vinay Kumar Yadav" <vinay.yadav@chelsio.com>,
	Rohit Maheshwari <rohitm@chelsio.com>,
	"David S. Miller" <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>,
	Thomas Petazzoni <thomas.petazzoni@bootlin.com>,
	Marcin Wojtas <mw@semihalf.com>,
	Russell King <linux@armlinux.org.uk>,
	Mirko Lindner <mlindner@marvell.com>,
	Stephen Hemminger <stephen@networkplumber.org>,
	"Tariq Toukan" <tariqt@nvidia.com>,
	Jesper Dangaard Brouer <hawk@kernel.org>,
	"Alexei Starovoitov" <ast@kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>,
	"John Fastabend" <john.fastabend@gmail.com>,
	Boris Pismenny <borisp@nvidia.com>, Arnd Bergmann <arnd@arndb.de>,
	Andrew Morton <akpm@linux-foundation.org>,
	"Peter Zijlstra (Intel)" <peterz@infradead.org>,
	Vlastimil Babka <vbabka@suse.cz>, Yu Zhao <yuzhao@google.com>,
	Will Deacon <will@kernel.org>,
	Michel Lespinasse <walken@google.com>,
	Fenghua Yu <fenghua.yu@intel.com>, Roman Gushchin <guro@fb.com>,
	Hugh Dickins <hughd@google.com>, Peter Xu <peterx@redhat.com>,
	Jason Gunthorpe <jgg@ziepe.ca>,
	Guoqing Jiang <guoqing.jiang@cloud.ionos.com>,
	Jonathan Lemon <jonathan.lemon@gmail.com>,
	Alexander Lobakin <alobakin@pm.me>,
	Cong Wang <cong.wang@bytedance.com>, wenxu <wenxu@ucloud.cn>,
	Kevin Hao <haokexin@gmail.com>,
	Aleksandr Nogikh <nogikh@google.com>,
	Jakub Sitnicki <jakub@cloudflare.com>,
	Marco Elver <elver@google.com>,
	Willem de Bruijn <willemb@google.com>,
	Miaohe Lin <linmiaohe@huawei.com>,
	Guillaume Nault <gnault@redhat.com>,
	<linux-kernel@vger.kernel.org>, <linux-rdma@vger.kernel.org>,
	<bpf@vger.kernel.org>, Matthew Wilcox <willy@infradead.org>,
	Eric Dumazet <edumazet@google.com>,
	David Ahern <dsahern@gmail.com>,
	Lorenzo Bianconi <lorenzo@kernel.org>,
	Saeed Mahameed <saeedm@nvidia.com>, Andrew Lunn <andrew@lunn.ch>,
	Paolo Abeni <pabeni@redhat.com>
Subject: Re: [PATCH net-next v3 0/5] page_pool: recycle buffers
Date: Fri, 30 Apr 2021 11:01:48 +0800	[thread overview]
Message-ID: <9bf7c5b3-c3cf-e669-051f-247aa8df5c5a@huawei.com> (raw)
In-Reply-To: <YIsAIzecktXXBlxn@apalos.home>

On 2021/4/30 2:51, Ilias Apalodimas wrote:
> Hi Yunsheng,
> 
> On Thu, Apr 29, 2021 at 04:27:21PM +0800, Yunsheng Lin wrote:
>> On 2021/4/10 6:37, Matteo Croce wrote:
>>> From: Matteo Croce <mcroce@microsoft.com>

[...]

>>
>> 1. skb frag page recycling do not need "struct xdp_rxq_info" or
>>    "struct xdp_mem_info" to bond the relation between "struct page" and
>>    "struct page_pool", which seems uncessary at this point if bonding
>>    a "struct page_pool" pointer directly in "struct page" does not cause
>>    space increasing.
> 
> We can't do that. The reason we need those structs is that we rely on the
> existing XDP code, which already recycles it's buffers, to enable
> recycling.  Since we allocate a page per packet when using page_pool for a
> driver , the same ideas apply to an SKB and XDP frame. We just recycle the

I am not really familiar with XDP here, but a packet from the hw is either a
"struct xdp_frame/xdp_buff" for XDP or a "struct sk_buff" for the TCP/IP stack;
a packet cannot be both a "struct xdp_frame/xdp_buff" and a "struct sk_buff" at
the same time, right?

What does not really make sense to me is this: a skb's frag page can only be
recycled when it comes from a page pool, right? If that is true, the switch
case in __xdp_return() does not really make sense for skb recycling; why go to
all the trouble of checking mem->type and mem->id to find the page_pool
pointer when a recyclable page for a skb can only come from a page pool?

> payload and we don't really care what's in that.  We could rename the functions
> to something more generic in the future though ?
> 
>>
>> 2. it would be good to do the page reference count updating batching
>>    in page pool instead of specific driver.
>>
>>
>> page_pool_atomic_sub_if_positive() is added to decide who can call
>> page_pool_put_full_page(), because the driver and stack may hold
>> reference to the same page, only if last one which hold complete
>> reference to a page can call page_pool_put_full_page() to decide if
>> recycling is possible, if not, the page is released, so I am wondering
>> if a similar page_pool_atomic_sub_if_positive() can added to specific
>> user space address unmapping path to allow skb recycling for RX zerocopy
>> too?
>>
> 
> I would prefer a different page pool type if we wanted to support the split
> page model.  The changes as is are quite intrusive, since they change the 
> entire skb return path.  So I would prefer introducing the changes one at a 
> time. 

I understand there may be a fundamental semantic change when the split page
model is supported by page pool, but as I understand it the split page support
mainly affects the skb recycling path and the drivers that use page pool (XDP
too) if we are careful enough, not the entire skb return path.

Anyway, one change at a time is always preferred if supporting split pages
proves to be non-trivial and intrusive.

> 
> The fundamental difference between having the recycling in the driver vs
> having it in a generic API is pretty straightforward.  When a driver holds
> the extra page references he is free to decide what to reuse, when he is about
> to refill his Rx descriptors.  So TCP zerocopy might work even if the
> userspace applications hold the buffers for an X amount of time.
> On this proposal though we *need* to decide what to do with the buffer when we
> are about to free the skb.

I am not sure I understand what you meant by "free the skb"; does it mean
that kfree_skb() is called to free the skb?

As I understand it, if the skb completely owns the page (which means
page_count() == 1) when kfree_skb() is called, __page_pool_put_page() is
called; otherwise page_ref_dec() is called, which is exactly what
page_pool_atomic_sub_if_positive() tries to handle atomically.

> 
> [...]
> 
> 
> Cheers
> /Ilias
> 

