From: Yunsheng Lin <linyunsheng@huawei.com>
To: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Cc: Jesper Dangaard Brouer <jbrouer@redhat.com>,
	<davem@davemloft.net>, <kuba@kernel.org>,
	<linuxarm@openeuler.org>, <yisen.zhuang@huawei.com>,
	<salil.mehta@huawei.com>, <thomas.petazzoni@bootlin.com>,
	<mw@semihalf.com>, <linux@armlinux.org.uk>, <hawk@kernel.org>,
	<ast@kernel.org>, <daniel@iogearbox.net>,
	<john.fastabend@gmail.com>, <akpm@linux-foundation.org>,
	<peterz@infradead.org>, <will@kernel.org>, <willy@infradead.org>,
	<vbabka@suse.cz>, <fenghua.yu@intel.com>, <guro@fb.com>,
	<peterx@redhat.com>, <feng.tang@intel.com>, <jgg@ziepe.ca>,
	<mcroce@microsoft.com>, <hughd@google.com>,
	<jonathan.lemon@gmail.com>, <alobakin@pm.me>,
	<willemb@google.com>, <wenxu@ucloud.cn>,
	<cong.wang@bytedance.com>, <haokexin@gmail.com>,
	<nogikh@google.com>, <elver@google.com>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <bpf@vger.kernel.org>,
	Alexander Duyck <alexander.duyck@gmail.com>
Subject: Re: [PATCH net-next RFC 1/2] page_pool: add page recycling support based on elevated refcnt
Date: Tue, 6 Jul 2021 14:46:07 +0800
Message-ID: <33aee58e-b1d5-ce7b-1576-556d0da28560@huawei.com>
In-Reply-To: <YOPiHzVkKhdHmxLB@enceladus>

On 2021/7/6 12:54, Ilias Apalodimas wrote:
> Hi Yunsheng,
> 
> Thanks for having a look!

Hi,

Thanks for reviewing.

> 
> On Fri, Jul 02, 2021 at 06:15:13PM +0800, Yunsheng Lin wrote:
>> On 2021/7/2 17:42, Jesper Dangaard Brouer wrote:
>>>
>>> On 30/06/2021 11.17, Yunsheng Lin wrote:
>>>> Currently page pool only supports page recycling when the
>>>> refcnt of a page is one, which means it cannot support the
>>>> split-page recycling implemented in most ethernet drivers.
>>>
>>> Cc. Alex Duyck as I consider him an expert in this area.
>>
>> Thanks.
>>
>>>
>>>
>>>> So add elevated refcnt support in page pool, and support
>>>> allocating page frags, to enable multiple frames per page
>>>> based on the elevated refcnt support.
>>>>
>>>> As the elevated refcnt is per page and there is no space
>>>> for it in "struct page" now, add a dynamically allocated
>>>> "struct page_pool_info" to record the page pool ptr and the
>>>> refcnt corresponding to a page for now. Later, we can recycle
>>>> the "struct page_pool_info" too, or use part of the page
>>>> memory to record pp_info.
>>>
>>> I'm not happy with allocating a memory (slab) object "struct page_pool_info" per page.
>>>
>>> This also gives us an extra level of indirection.
>>
>> I'm not happy with that either; if there is a better way to
>> avoid it, I will be happy to change it :)
> 
> I think what we have to answer here is: do we want, and does it make sense,
> for page_pool to do the housekeeping of the buffer splitting, or are we
> better off having each driver do that?  IIRC your previous patch on top of
> the original recycling patchset was just 'atomic' refcnts on top of page pool.

You are right that the driver was doing the buffer splitting in the
previous patch.

The reasons why I abandoned that are:
1. Currently the meta-data of a page in the driver is per desc, which means
   the driver may not be able to use the first half of a page for one desc
   and the second half of the same page for another desc. This ping-pong way
   of reusing the whole page for only one desc in the driver seems unnecessary
   and wastes a lot of memory when the page pool is already doing the reusing.

2. It also keeps the API easy for the driver to use: the driver calls
   page_pool_dev_alloc_frag() and page_pool_put_full_page() for the elevated
   refcnt case, corresponding to page_pool_dev_alloc_pages() and
   page_pool_put_full_page() for the non-elevated refcnt case, so the driver
   does not need to worry about the meta-data of a page (see the sketch below).
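
To make the intended usage concrete, here is a minimal sketch of the driver
side. The struct and function names below are made up for illustration, and
the exact page_pool_dev_alloc_frag() signature (offset/size parameters) is an
assumption based on how the frag API was later shaped, so it may differ from
this RFC:

#include <net/page_pool.h>

/* Hypothetical per-desc control block, only for illustration. */
struct rx_desc_cb {
	struct page *page;
	unsigned int offset;
};

/* Elevated refcnt case: several descs share one pool page, each taking
 * a 2K frag.  The page_pool_dev_alloc_frag() signature is assumed.
 */
static int rx_fill_one_desc(struct page_pool *pool, struct rx_desc_cb *cb)
{
	unsigned int offset;
	struct page *page;

	page = page_pool_dev_alloc_frag(pool, &offset, 2048);
	if (!page)
		return -ENOMEM;

	cb->page = page;
	cb->offset = offset;
	return 0;
}

The non-elevated refcnt case would only differ in the allocation call
(page_pool_dev_alloc_pages()); freeing is page_pool_put_full_page(pool,
cb->page, false) either way, which is the symmetry point 2 is about.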

> 
> I think I'd prefer each driver having its own meta-data for how it splits
> the page, mostly due to hardware diversity, but tbh I don't have any
> strong preference atm.

Usually how the driver splits the page is fixed for a given rx configuration
(like the MTU), so the driver is able to pass that info to the page pool,
roughly as sketched below.
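
For example, the fixed per-desc buffer size derived from the rx configuration
could be reflected when the pool is created.  The PP_FLAG_PAGE_FRAG flag and
the params below follow the frag support as it was eventually merged, not
necessarily this RFC, and "ring_size"/"dev" are placeholders for the driver's
own values, so treat this as an assumption:

#include <linux/dma-direction.h>
#include <net/page_pool.h>

/* Sketch only: create a pool for a fixed 2K-per-desc split derived
 * from the MTU.
 */
struct page_pool_params pp_params = {
	.flags		= PP_FLAG_DMA_MAP | PP_FLAG_PAGE_FRAG,
	.order		= 0,
	.pool_size	= ring_size,
	.nid		= dev_to_node(dev),
	.dev		= dev,
	.dma_dir	= DMA_FROM_DEVICE,
};
struct page_pool *pool = page_pool_create(&pp_params);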


> 
>>
>>>
>>>
>>> You are also adding a page "frag" API inside page pool, which I'm not 100% convinced belongs inside page_pool APIs.
>>>
>>> Please notice the APIs that Alex Duyck added in mm/page_alloc.c:
>>
>> Actually, that is where the idea of using "page frag" came from.
>>
>> Aside from the performance improvement, there is a memory usage
>> decrease for a 64K page size kernel, which means a 64K page can
>> be used by 32 descs with a 2K buffer size, and that is a lot of
>> memory saving for a 64K page size kernel compared to the current
>> split-page reusing implemented in the driver.
>>
> 
> Whether the driver or page_pool itself keeps the meta-data, the outcome
> here won't change.  We'll still be able to use page frags.

As above, it is the ping-pong way of reusing when the driver keeps the
meta-data, and the page-frag way of reusing when the page pool keeps
the meta-data.

I am not sure whether the page-frag way of reusing is possible when we
still keep the meta-data in the driver; it seems very complex at first
thought.  A rough sketch of the two schemes is below.
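
To illustrate the difference, a simplified sketch of the driver-side
ping-pong scheme, reusing the hypothetical rx_desc_cb from the earlier
sketch (the helper name and the 4K-page/2K-buffer split are assumptions):

/* Driver-kept meta-data (ping-pong reuse), simplified: one desc owns a
 * whole page and can only flip between its two 2K halves, and only when
 * nobody else still holds a reference.
 */
static bool rx_try_pingpong_reuse(struct rx_desc_cb *cb)
{
	if (page_count(cb->page) != 1)
		return false;	/* still referenced elsewhere, need a new page */

	/* sole owner: flip to the other half and take a new reference */
	cb->offset = cb->offset ? 0 : 2048;
	page_ref_inc(cb->page);
	return true;
}

/* Page-pool-kept meta-data (page-frag reuse): every desc just asks the
 * pool for the next free 2K chunk, so e.g. a 64K page can back 32 descs
 * instead of being tied to a single desc.
 */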

> 
> 
> Cheers
> /Ilias
>>
>>>
>>>  __page_frag_cache_refill() + __page_frag_cache_drain() + page_frag_alloc_align()
>>>
>>>
>>
>> [...]
> .
> 


Thread overview: 34+ messages
2021-06-30  9:17 [PATCH net-next RFC 0/2] add elevated refcnt support for page pool Yunsheng Lin
2021-06-30  9:17 ` [PATCH net-next RFC 1/2] page_pool: add page recycling support based on elevated refcnt Yunsheng Lin
2021-07-02  9:42   ` Jesper Dangaard Brouer
2021-07-02 10:15     ` Yunsheng Lin
2021-07-06  4:54       ` Ilias Apalodimas
2021-07-06  6:46         ` Yunsheng Lin [this message]
2021-07-06  8:18           ` Ilias Apalodimas
2021-07-06 20:45   ` Alexander Duyck
2021-07-07  3:05     ` Yunsheng Lin
2021-07-07 15:01       ` Alexander Duyck
2021-07-07 19:03         ` Ilias Apalodimas
2021-07-07 21:49           ` Alexander Duyck
2021-07-08 14:21             ` Ilias Apalodimas
2021-07-08 14:24               ` Alexander Duyck
2021-07-08 14:50                 ` Ilias Apalodimas
2021-07-08 15:17                   ` Ilias Apalodimas
2021-07-08 15:29                     ` Alexander Duyck
2021-07-08 15:36                       ` Ilias Apalodimas
2021-07-08 15:41                         ` Alexander Duyck
2021-07-08 15:47                           ` Ilias Apalodimas
2021-07-08  2:27         ` Yunsheng Lin
2021-07-08 15:36           ` Alexander Duyck
2021-07-09  6:26             ` Yunsheng Lin
2021-07-09 14:15               ` Alexander Duyck
2021-07-10  9:16                 ` [Linuxarm] " Yunsheng Lin
2021-06-30  9:17 ` [PATCH net-next RFC 2/2] net: hns3: support skb's frag page recycling based on page pool Yunsheng Lin
2021-07-02  8:36 ` [PATCH net-next RFC 0/2] add elevated refcnt support for " Ilias Apalodimas
2021-07-02 13:39 ` Matteo Croce
2021-07-06 15:51   ` Russell King (Oracle)
2021-07-06 23:19     ` Matteo Croce
2021-07-07 16:50       ` Marcin Wojtas
2021-07-09  4:15         ` Matteo Croce
2021-07-09  6:40           ` Yunsheng Lin
2021-07-09  6:42             ` Ilias Apalodimas
