From: Jesper Dangaard Brouer <jbrouer@redhat.com>
To: Yunsheng Lin <linyunsheng@huawei.com>,
davem@davemloft.net, kuba@kernel.org
Cc: brouer@redhat.com, netdev@vger.kernel.org,
linux-kernel@vger.kernel.org, linuxarm@openeuler.org,
hawk@kernel.org, ilias.apalodimas@linaro.org,
jonathan.lemon@gmail.com, alobakin@pm.me, willemb@google.com,
cong.wang@bytedance.com, pabeni@redhat.com, haokexin@gmail.com,
nogikh@google.com, elver@google.com, memxor@gmail.com,
edumazet@google.com, alexander.duyck@gmail.com,
dsahern@gmail.com
Subject: Re: [PATCH net-next 2/7] page_pool: support non-split page with PP_FLAG_PAGE_FRAG
Date: Thu, 23 Sep 2021 14:08:03 +0200 [thread overview]
Message-ID: <c85a4ecc-80bb-d78f-d72a-0f820fb02eb9@redhat.com> (raw)
In-Reply-To: <20210922094131.15625-3-linyunsheng@huawei.com>
On 22/09/2021 11.41, Yunsheng Lin wrote:
> Currently when PP_FLAG_PAGE_FRAG is set, the caller is not
> expected to call page_pool_alloc_pages() directly because of
> the PP_FLAG_PAGE_FRAG checking in __page_pool_put_page().
>
> The patch removes the above checking to enable non-split page
> support when PP_FLAG_PAGE_FRAG is set.
>
> Reviewed-by: Alexander Duyck <alexanderduyck@fb.com>
> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
> ---
> net/core/page_pool.c | 12 +++++++-----
> 1 file changed, 7 insertions(+), 5 deletions(-)
>
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index a65bd7972e37..f7e71dcb6a2e 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -315,11 +315,14 @@ struct page *page_pool_alloc_pages(struct page_pool *pool, gfp_t gfp)
>
> /* Fast-path: Get a page from cache */
> page = __page_pool_get_cached(pool);
> - if (page)
> - return page;
>
> /* Slow-path: cache empty, do real allocation */
> - page = __page_pool_alloc_pages_slow(pool, gfp);
> + if (!page)
> + page = __page_pool_alloc_pages_slow(pool, gfp);
> +
> + if (likely(page))
> + page_pool_set_frag_count(page, 1);
> +
I really don't like that you add one atomic_long_set operation per page
alloc call.
This is a fast-path for XDP use-cases, which you are ignoring since your
drivers don't implement XDP.
As I cannot ask you to run XDP benchmarks, I fortunately have some
page_pool-specific microbenchmarks you can run instead.
Please provide before-and-after results from running benchmarks
[1] and [2].
[1]
https://github.com/netoptimizer/prototype-kernel/blob/master/kernel/lib/bench_page_pool_simple.c
[2]
https://github.com/netoptimizer/prototype-kernel/blob/master/kernel/lib/bench_page_pool_cross_cpu.c
How to use these modules is documented here[3]:
[3]
https://prototype-kernel.readthedocs.io/en/latest/prototype-kernel/build-process.html
> return page;
> }
> EXPORT_SYMBOL(page_pool_alloc_pages);
> @@ -428,8 +431,7 @@ __page_pool_put_page(struct page_pool *pool, struct page *page,
> unsigned int dma_sync_size, bool allow_direct)
> {
> /* It is not the last user for the page frag case */
> - if (pool->p.flags & PP_FLAG_PAGE_FRAG &&
> - page_pool_atomic_sub_frag_count_return(page, 1))
> + if (page_pool_atomic_sub_frag_count_return(page, 1))
> return NULL;
This adds an atomic_long_read, even when PP_FLAG_PAGE_FRAG is not set.
>
> /* This allocator is optimized for the XDP mode that uses
>
Thread overview: 26+ messages
2021-09-22 9:41 [PATCH net-next 0/7] some optimization for page pool Yunsheng Lin
2021-09-22 9:41 ` [PATCH net-next 1/7] page_pool: disable dma mapping support for 32-bit arch with 64-bit DMA Yunsheng Lin
2021-09-23 9:10 ` Ilias Apalodimas
2021-09-23 9:33 ` Jesper Dangaard Brouer
2021-09-23 10:02 ` Ilias Apalodimas
2021-09-23 11:13 ` Yunsheng Lin
2021-09-23 13:07 ` Ilias Apalodimas
2021-09-24 7:04 ` Yunsheng Lin
2021-09-24 7:25 ` Ilias Apalodimas
2021-09-24 8:01 ` Yunsheng Lin
2021-09-22 9:41 ` [PATCH net-next 2/7] page_pool: support non-split page with PP_FLAG_PAGE_FRAG Yunsheng Lin
2021-09-23 12:08 ` Jesper Dangaard Brouer [this message]
2021-09-24 7:23 ` Yunsheng Lin
2021-09-30 7:28 ` [Linuxarm] " Yunsheng Lin
2021-09-22 9:41 ` [PATCH net-next 3/7] pool_pool: avoid calling compound_head() for skb frag page Yunsheng Lin
2021-09-23 8:33 ` Ilias Apalodimas
2021-09-23 11:24 ` Yunsheng Lin
2021-09-23 11:47 ` Ilias Apalodimas
2021-09-24 7:33 ` Yunsheng Lin
2021-09-24 7:44 ` Ilias Apalodimas
2021-09-22 9:41 ` [PATCH net-next 4/7] page_pool: change BIAS_MAX to support incrementing Yunsheng Lin
2021-09-22 9:41 ` [PATCH net-next 5/7] skbuff: keep track of pp page when __skb_frag_ref() is called Yunsheng Lin
2021-09-22 9:41 ` [PATCH net-next 6/7] skbuff: only use pp_magic identifier for a skb' head page Yunsheng Lin
2021-09-22 9:41 ` [PATCH net-next 7/7] skbuff: remove unused skb->pp_recycle Yunsheng Lin
2021-09-23 7:07 ` [PATCH net-next 0/7] some optimization for page pool Ilias Apalodimas
2021-09-23 11:12 ` Yunsheng Lin