From: Yunsheng Lin <linyunsheng@huawei.com>
To: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Cc: Matteo Croce <mcroce@linux.microsoft.com>,
	<netdev@vger.kernel.org>, <linux-mm@kvack.org>,
	Ayush Sawal <ayush.sawal@chelsio.com>,
	"Vinay Kumar Yadav" <vinay.yadav@chelsio.com>,
	Rohit Maheshwari <rohitm@chelsio.com>,
	"David S. Miller" <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>,
	Thomas Petazzoni <thomas.petazzoni@bootlin.com>,
	Marcin Wojtas <mw@semihalf.com>,
	Russell King <linux@armlinux.org.uk>,
	Mirko Lindner <mlindner@marvell.com>,
	Stephen Hemminger <stephen@networkplumber.org>,
	"Tariq Toukan" <tariqt@nvidia.com>,
	Jesper Dangaard Brouer <hawk@kernel.org>,
	"Alexei Starovoitov" <ast@kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>,
	"John Fastabend" <john.fastabend@gmail.com>,
	Boris Pismenny <borisp@nvidia.com>, Arnd Bergmann <arnd@arndb.de>,
	Andrew Morton <akpm@linux-foundation.org>,
	"Peter Zijlstra (Intel)" <peterz@infradead.org>,
	Vlastimil Babka <vbabka@suse.cz>, Yu Zhao <yuzhao@google.com>,
	Will Deacon <will@kernel.org>, Fenghua Yu <fenghua.yu@intel.com>,
	Roman Gushchin <guro@fb.com>, Hugh Dickins <hughd@google.com>,
	Peter Xu <peterx@redhat.com>, Jason Gunthorpe <jgg@ziepe.ca>,
	Jonathan Lemon <jonathan.lemon@gmail.com>,
	Alexander Lobakin <alobakin@pm.me>,
	Cong Wang <cong.wang@bytedance.com>, wenxu <wenxu@ucloud.cn>,
	Kevin Hao <haokexin@gmail.com>,
	Jakub Sitnicki <jakub@cloudflare.com>,
	Marco Elver <elver@google.com>,
	Willem de Bruijn <willemb@google.com>,
	Miaohe Lin <linmiaohe@huawei.com>,
	Guillaume Nault <gnault@redhat.com>,
	<linux-kernel@vger.kernel.org>, <linux-rdma@vger.kernel.org>,
	<bpf@vger.kernel.org>, Matthew Wilcox <willy@infradead.org>,
	Eric Dumazet <edumazet@google.com>,
	David Ahern <dsahern@gmail.com>,
	Lorenzo Bianconi <lorenzo@kernel.org>,
	Saeed Mahameed <saeedm@nvidia.com>, Andrew Lunn <andrew@lunn.ch>,
	Paolo Abeni <pabeni@redhat.com>,
	Sven Auhagen <sven.auhagen@voleatech.de>
Subject: Re: [PATCH net-next v5 3/5] page_pool: Allow drivers to hint on SKB recycling
Date: Sat, 15 May 2021 10:07:06 +0800	[thread overview]
Message-ID: <cd0c0a2b-986e-a672-de7e-798ab2843d76@huawei.com> (raw)
In-Reply-To: <YJ5APhzabmAKIKCE@apalos.home>

On 2021/5/14 17:17, Ilias Apalodimas wrote:
> On Fri, May 14, 2021 at 04:31:50PM +0800, Yunsheng Lin wrote:
>> On 2021/5/14 15:36, Ilias Apalodimas wrote:
>>> [...]
>>>>> +		return false;
>>>>> +
>>>>> +	pp = (struct page_pool *)page->pp;
>>>>> +
>>>>> +	/* Driver set this to memory recycling info. Reset it on recycle.
>>>>> +	 * This will *not* work for NIC using a split-page memory model.
>>>>> +	 * The page will be returned to the pool here regardless of the
>>>>> +	 * 'flipped' fragment being in use or not.
>>>>> +	 */
>>>>> +	page->pp = NULL;
>>>>
>>>> Why not clear page->pp only when the page cannot be recycled by
>>>> the page pool, so that we do not need to set and clear it every
>>>> time the page is recycled?
>>>>
>>>
>>> If the page cannot be recycled, page->pp will probably not be set to
>>> begin with. Since we don't embed the feature in page_pool and we require
>>> the driver to explicitly enable it as part of the 'skb flow', I'd rather
>>> keep it as is. When we set/clear page->pp, the page is probably already
>>> in cache, so I doubt this will have any measurable impact.
>>
>> The point is that we already have skb->pp_recycle to let the driver
>> explicitly enable recycling as part of the 'skb flow'. If the page pool
>> keeps page->pp while it owns the page, then the driver only needs to
>> call skb_mark_for_recycle() once per skb, instead of calling
>> skb_mark_for_recycle() for each page frag of the skb.
>>
> 
> The driver is meant to call skb_mark_for_recycle() for the skb and
> page_pool_store_mem_info() for the fragments (in order to store page->pp).
> Nothing bad will happen if you call skb_mark_for_recycle() on a frag,
> though; in any case you need to store the page_pool pointer of each frag
> in struct page.

Right. And nothing bad will happen if we keep the page_pool pointer in
page->pp for as long as the page pool owns the page, even when
skb->pp_recycle is not set, right?
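
For reference, the driver-side flow I understand from this patch is
roughly the below (only a sketch; the helper names and arguments are my
reading of this series and may not match v5 exactly):

/* hypothetical rx path, only to illustrate the calls being discussed */
static struct sk_buff *rx_build_skb(struct page_pool *pp,
				    struct page *head_page,
				    struct page *frag_page,
				    unsigned int len)
{
	struct sk_buff *skb;

	skb = build_skb(page_address(head_page), PAGE_SIZE);
	if (!skb)
		return NULL;
	skb_put(skb, len);

	/* flag the skb so the free path will try to recycle its pages */
	skb_mark_for_recycle(skb, head_page, pp);

	/* each extra fragment needs its page->pp stored as well */
	skb_add_rx_frag(skb, 0, frag_page, 0, len, PAGE_SIZE);
	page_pool_store_mem_info(frag_page, pp);

	return skb;
}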

> 
>> Maybe we can add a parameter in "struct page_pool_params" to let the
>> driver decide whether the page pool ptr is stored in page->pp while the
>> page pool owns the page?
> 
> Then you'd have to check the page pool config before saving the meta-data,

I am not sure what "saving the meta-data" refers to here?

> and you would have to make the skb path aware of that as well (I assume you
> mean replace pp_recycle with this?).

I meant we could set page->pp when the page is allocated from
alloc_pages() in __page_pool_alloc_pages_slow(), either unconditionally
or according to a newly added field in pool->p, and only clear it in
page_pool_release_page(). Between those two points the page is owned by
the page pool, right?
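
Something along these lines (untested sketch only, to show what I mean;
'store_pp' is a hypothetical new field in struct page_pool_params, and
the existing DMA mapping/accounting code is elided):

static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
						 gfp_t gfp)
{
	struct page *page;

	page = alloc_pages_node(pool->p.nid, gfp, pool->p.order);
	if (!page)
		return NULL;

	/* the pool owns the page from here on, remember the owner */
	if (pool->p.store_pp)
		page->pp = pool;

	/* ... existing DMA mapping and stats ... */
	return page;
}

void page_pool_release_page(struct page_pool *pool, struct page *page)
{
	/* ... existing DMA unmap and accounting ... */

	/* the page leaves the pool, drop the back-pointer */
	page->pp = NULL;
}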

> If not, and you just want to add an extra flag on page_pool_params and be
> able to enable recycling depending on that flag, we can just add a patch
> afterwards. I am not sure we need an extra if for each packet though.

In that case, skb_mark_for_recycle() would only set skb->pp_recycle and
would not touch pool->p.
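
I.e. something as simple as the below sketch, with the per-pool flag set
once at pool creation time instead:

/* sketch only: with the pool storing page->pp itself, the
 * driver-facing helper only needs to flag the skb.
 */
static inline void skb_mark_for_recycle(struct sk_buff *skb)
{
	skb->pp_recycle = 1;
}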

> 
>>
>> Another thing that occurred to me: if the driver uses a page from the
>> page pool to form an skb and it does not call skb_mark_for_recycle(),
>> then there will be a resource leak, right? If so, the
>> skb_mark_for_recycle() call does not seem to add any value?
>>
> 
> Not really, the driver has 2 choices:
> - call page_pool_release_page() once it receives the payload. That will
>   clean up DMA mappings (if the page pool is responsible for them) and
>   free the buffer.

That is only needed before SKB recycling is supported, or when the driver
explicitly does not want the SKB recycling support, right?

> - call skb_mark_for_recycle(), which will end up recycling the buffer.

If the driver needs an extra flag to enable recycling on a per-skb basis
instead of per page pool, then adding skb_mark_for_recycle() makes sense
to me too; otherwise it seems that adding a field in pool->p to enable
skb recycling makes more sense?

> 
> If you call neither of those, you'd leak a page, but that's a driver bug.
> Patches [4/5, 5/5] do that for two Marvell drivers.
> I really want to make drivers opt in to the feature instead of always
> enabling it.
> 
> Thanks
> /Ilias
>>
>>>
>>>>> +	page_pool_put_full_page(pp, virt_to_head_page(data), false);
>>>>> +
>>>>>  	C(end);
>>>
>>> [...]
>>
>>
> 
> .
> 




Thread overview: 22+ messages
2021-05-13 16:58 [PATCH net-next v5 0/5] page_pool: recycle buffers Matteo Croce
2021-05-13 16:58 ` [PATCH net-next v5 1/5] mm: add a signature in struct page Matteo Croce
2021-05-14  1:00   ` Matthew Wilcox
2021-05-14  1:34     ` Matteo Croce
2021-05-18 15:44     ` Matteo Croce
2021-05-13 16:58 ` [PATCH net-next v5 2/5] skbuff: add a parameter to __skb_frag_unref Matteo Croce
2021-05-13 16:58 ` [PATCH net-next v5 3/5] page_pool: Allow drivers to hint on SKB recycling Matteo Croce
2021-05-14  3:39   ` Yunsheng Lin
2021-05-14  7:36     ` Ilias Apalodimas
2021-05-14  8:31       ` Yunsheng Lin
2021-05-14  9:17         ` Ilias Apalodimas
2021-05-15  2:07           ` Yunsheng Lin [this message]
2021-05-17  6:38             ` Ilias Apalodimas
2021-05-17  8:25               ` Yunsheng Lin
2021-05-17  9:36                 ` Ilias Apalodimas
2021-05-17 11:10                   ` Yunsheng Lin
2021-05-17 11:35                     ` Ilias Apalodimas
2021-05-13 16:58 ` [PATCH net-next v5 4/5] mvpp2: recycle buffers Matteo Croce
2021-05-13 18:20   ` Russell King (Oracle)
2021-05-13 23:52     ` Matteo Croce
2021-05-13 16:58 ` [PATCH net-next v5 5/5] mvneta: " Matteo Croce
2021-05-13 18:25   ` Russell King (Oracle)
