From: Alexander Lobakin <alobakin@pm.me>
To: Mel Gorman <mgorman@techsingularity.net>
Cc: Alexander Lobakin <alobakin@pm.me>,
	Andrew Morton <akpm@linux-foundation.org>,
	Chuck Lever <chuck.lever@oracle.com>,
	Jesper Dangaard Brouer <brouer@redhat.com>,
	Christoph Hellwig <hch@infradead.org>,
	Alexander Duyck <alexander.duyck@gmail.com>,
	Matthew Wilcox <willy@infradead.org>,
	LKML <linux-kernel@vger.kernel.org>,
	Linux-Net <netdev@vger.kernel.org>, Linux-MM <linux-mm@kvack.org>,
	Linux-NFS <linux-nfs@vger.kernel.org>
Subject: Re: [PATCH 0/7 v4] Introduce a bulk order-0 page allocator with two in-tree users
Date: Wed, 17 Mar 2021 16:31:07 +0000
Message-ID: <20210317163055.800210-1-alobakin@pm.me>
In-Reply-To: <20210312154331.32229-1-mgorman@techsingularity.net>

From: Mel Gorman <mgorman@techsingularity.net>
Date: Fri, 12 Mar 2021 15:43:24 +0000

Hi there,

> This series is based on top of Matthew Wilcox's series "Rationalise
> __alloc_pages wrapper" and does not apply to 5.12-rc2. If you want to
> test and are not using Andrew's tree as a baseline, I suggest using the
> following git tree
>
> git://git.kernel.org/pub/scm/linux/kernel/git/mel/linux.git mm-bulk-rebase-v4r2

I gave this series a go on my setup. It showed a bump of 10 Mbps in
UDP forwarding, but dropped TCP forwarding by almost 50 Mbps.

(4-core 1.2 GHz MIPS32 R2, 16 KB page size, Page Pool order-0
allocations with an MTU of 1508 bytes, linear frames via build_skb(),
GRO + TSO/USO)

I didn't have time to drill into the code, so I can't provide any
additional details for now. Feel free to request anything you need,
though, and I'll try to find a window to collect it.

> Note to Chuck and Jesper -- as this is a cross-subsystem series, you may
> want to send the sunrpc and page_pool pre-requisites (patches 4 and 6)
> directly to the subsystem maintainers. While sunrpc is low-risk, I'm
> vaguely aware that there are other prototype series on netdev that affect
> page_pool. The conflict should be obvious in linux-next.
>
> Changelog since v3
> o Rebase on top of Matthew's series consolidating the alloc_pages API
> o Rename alloced to allocated
> o Split out preparation patch for prepare_alloc_pages
> o Defensive check for bulk allocation or <= 0 pages
> o Call single page allocation path only if no pages were allocated
> o Minor cosmetic cleanups
> o Reorder patch dependencies by subsystem. As this is a cross-subsystem
>   series, the mm patches have to be merged before the sunrpc and net
>   users.
>
> Changelog since v2
> o Prep new pages with IRQs enabled
> o Minor documentation update
>
> Changelog since v1
> o Parenthesise binary and boolean comparisons
> o Add reviewed-bys
> o Rebase to 5.12-rc2
>
> This series introduces a bulk order-0 page allocator with sunrpc and
> the network page pool being the first users. The implementation is not
> particularly efficient, and the intention is to iron out what the
> semantics of the API should be for users. Once the semantics are ironed
> out, it can be made more efficient. Despite that, this is still a
> performance-related series for users that require multiple pages for an
> operation without making multiple round-trips to the page allocator.
> Quoting the last patch for the high-speed networking use-case:
>
>     For an XDP-redirect workload with the 100G mlx5 driver (which uses
>     page_pool), redirecting xdp_frame packets into a veth that does
>     XDP_PASS to create an SKB from the xdp_frame (which then cannot
>     return the page to the page_pool), we saw[1] an improvement of 18.8%
>     from using the alloc_pages_bulk API (3,677,958 pps -> 4,368,926 pps).
>
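For context, a minimal caller-side sketch of how such a bulk API might
be used. I'm assuming the list-based signature this series appears to
introduce, roughly:

    unsigned long alloc_pages_bulk(gfp_t gfp, unsigned long nr_pages,
                                   struct list_head *list);

returning the number of order-0 pages placed on the list (patch 3 has
the authoritative prototype); fill_page_batch() below is made up for
illustration.

#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/list.h>
#include <linux/mm.h>

/* Sketch only: the bulk path may legitimately return fewer pages than
 * requested, so callers keep a single-page fallback.
 */
static int fill_page_batch(struct list_head *batch, unsigned long want)
{
        unsigned long got = alloc_pages_bulk(GFP_KERNEL, want, batch);

        while (got < want) {
                struct page *page = alloc_page(GFP_KERNEL);

                if (!page)
                        return -ENOMEM; /* caller unwinds @batch */
                list_add_tail(&page->lru, batch);
                got++;
        }
        return 0;
}

The win is that the common case becomes one round-trip to the page
allocator per batch instead of one per page.
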
> Both users in this series are corner cases (NFS and high-speed networks)
> so it is unlikely that most users will see any benefit in the short
> term. Potential other users are batch allocations for page cache
> readahead, fault around and SLUB allocations when high-order pages are
> unavailable. It's unknown how much benefit would be seen by converting
> multiple page allocation calls to a single batch or what difference it may
> make to headline performance. It's a chicken and egg problem given that
> the potential benefit cannot be investigated without an implementation
> to test against.
>
> Light testing passed, I'm relying on Chuck and Jesper to test the target
> users more aggressively but both report performance improvements with
> the initial RFC.
>
> Patch 1 moves GFP flag initialisation to prepare_alloc_pages
>
> Patch 2 renames a variable whose name is particularly unpopular
>
> Patch 3 adds a bulk page allocator
>
> Patch 4 is a sunrpc cleanup that is a prerequisite.
>
> Patch 5 is the sunrpc user. Chuck also has a patch which further caches
> 	pages but is not included in this series. It's not directly
> 	related to the bulk allocator and as it caches pages, it might
> 	have other concerns (e.g. does it need a shrinker?)
>
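If I read patch 5 right, the sunrpc user tops up the holes in a page
array with a single bulk request instead of a per-page loop. A rough
sketch under the same assumed alloc_pages_bulk() signature
(refill_page_array() is illustrative, not the actual svc_xprt code):

#include <linux/gfp.h>
#include <linux/list.h>
#include <linux/mm.h>
#include <linux/types.h>

/* Sketch only: fill the NULL slots of @pages with one bulk request. */
static bool refill_page_array(struct page **pages, unsigned int nr,
                              gfp_t gfp)
{
        LIST_HEAD(batch);
        struct page *page;
        unsigned int i, need = 0;

        for (i = 0; i < nr; i++)
                if (!pages[i])
                        need++;
        if (!need)
                return true;

        if (alloc_pages_bulk(gfp, need, &batch) < need) {
                /* Short batch: release it and let the caller retry. */
                while ((page = list_first_entry_or_null(&batch,
                                                        struct page, lru))) {
                        list_del(&page->lru);
                        put_page(page);
                }
                return false;
        }

        for (i = 0; i < nr; i++) {
                if (pages[i])
                        continue;
                page = list_first_entry_or_null(&batch, struct page, lru);
                list_del(&page->lru);
                pages[i] = page;
        }
        return true;
}
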
> Patch 6 is a preparation patch only for the network user
>
> Patch 7 converts the net page pool to the bulk allocator for order-0 pages.
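
The page_pool conversion presumably does the same on the pool's slow
refill path; sketched below with an illustrative stand-in structure,
not the real struct page_pool:

#include <linux/gfp.h>
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/mm.h>

struct pool_cache {                     /* stand-in, not the real page_pool */
        struct page *slot[64];
        unsigned int count;
};

/* Sketch only: one bulk call refills the whole cache on the slow path. */
static struct page *pool_refill(struct pool_cache *pc, gfp_t gfp)
{
        LIST_HEAD(batch);
        struct page *page, *first = NULL;

        if (!alloc_pages_bulk(gfp, ARRAY_SIZE(pc->slot), &batch))
                return NULL;

        while ((page = list_first_entry_or_null(&batch, struct page, lru))) {
                list_del(&page->lru);
                if (!first)
                        first = page;                   /* handed to caller */
                else if (pc->count < ARRAY_SIZE(pc->slot))
                        pc->slot[pc->count++] = page;   /* cached for later */
                else
                        put_page(page);                 /* defensive */
        }
        return first;
}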
>
>  include/linux/gfp.h   |  12 ++++
>  mm/page_alloc.c       | 149 +++++++++++++++++++++++++++++++++++++-----
>  net/core/page_pool.c  | 101 +++++++++++++++++-----------
>  net/sunrpc/svc_xprt.c |  47 +++++++++----
>  4 files changed, 240 insertions(+), 69 deletions(-)
>
> --
> 2.26.2

Thanks,
Al