From: Chuck Lever III <chuck.lever@oracle.com>
To: Mel Gorman <mgorman@techsingularity.net>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Vlastimil Babka <vbabka@suse.cz>,
	Jesper Dangaard Brouer <brouer@redhat.com>,
	Christoph Hellwig <hch@infradead.org>,
	Alexander Duyck <alexander.duyck@gmail.com>,
	Matthew Wilcox <willy@infradead.org>,
	LKML <linux-kernel@vger.kernel.org>,
	Linux-Net <netdev@vger.kernel.org>, Linux-MM <linux-mm@kvack.org>,
	Linux NFS Mailing List <linux-nfs@vger.kernel.org>
Subject: Re: [PATCH 0/3 v5] Introduce a bulk order-0 page allocator
Date: Mon, 22 Mar 2021 18:25:03 +0000	[thread overview]
Message-ID: <C1DEE677-47B2-4B12-BA70-6E29F0D199D9@oracle.com> (raw)
In-Reply-To: <20210322091845.16437-1-mgorman@techsingularity.net>



> On Mar 22, 2021, at 5:18 AM, Mel Gorman <mgorman@techsingularity.net> wrote:
> 
> This series is based on top of Matthew Wilcox's series "Rationalise
> __alloc_pages wrapper" and does not apply to 5.12-rc2. If you want to
> test and are not using Andrew's tree as a baseline, I suggest using the
> following git tree
> 
> git://git.kernel.org/pub/scm/linux/kernel/git/mel/linux.git mm-bulk-rebase-v5r9
> 
> The users of the API have been dropped in this version as the callers
> need to determine whether they prefer an array or a list interface
> (whether that preference is based on convenience or performance).

I now have a consumer implementation that uses the array
API. If I understand the contract correctly, the return
value is the last array index that __alloc_pages_bulk()
visits. My consumer uses the return value to determine
if it needs to call the allocator again.

It is returning some confusing (to me) results. I'd like
to get these resolved before posting any benchmark
results.

1. When it has visited every array element, it returns the
same value as was passed in @nr_pages. That's the (N + 1)th
array element, which shouldn't be touched. Should the
allocator return nr_pages - 1 in the fully successful case?
Or should the documentation describe the return value as
"the number of elements visited"?

2. Frequently the allocator returns a number smaller than
the total number of elements. As you may recall, sunrpc
will delay a bit (via a call to schedule_timeout) and then
call again. This is supposed to be a rare event, and the
delay is substantial. But with the array-based API, a
not-fully-successful allocator call seems to happen more
than half the time. Is that expected? I'm calling with
GFP_KERNEL; it seems like the allocator should be trying
harder.

3. Is the current design intended so that when the consumer
does call again, it passes in the array address + the
returned index (and @nr_pages reduced by the returned
index)?
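
To make that concrete, the retry loop I have in mind looks
something like this (again only a sketch: the signature is
assumed, the return value is assumed to count the elements
populated by this call, and the 500ms back-off mirrors what
sunrpc's svc_alloc_arg does today):

    unsigned long filled = 0;

    while (filled < nr_pages) {
        /* Resume at the first unpopulated element */
        filled += __alloc_pages_bulk(GFP_KERNEL,
                                     nr_pages - filled,
                                     pages + filled);
        if (filled == nr_pages)
            break;

        /* Supposed-to-be-rare back-off */
        set_current_state(TASK_INTERRUPTIBLE);
        schedule_timeout(msecs_to_jiffies(500));
    }

Or is the consumer expected to pass the original array address
and count on every call?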

Thanks for all your hard work, Mel.


> Changelog since v4
> o Drop users of the API
> o Remove free_pages_bulk interface, no users
> o Add array interface
> o Allocate single page if watermark checks on local zones fail
> 
> Changelog since v3
> o Rebase on top of Matthew's series consolidating the alloc_pages API
> o Rename alloced to allocated
> o Split out preparation patch for prepare_alloc_pages
> o Defensive check for bulk allocation or <= 0 pages
> o Call single page allocation path only if no pages were allocated
> o Minor cosmetic cleanups
> o Reorder patch dependencies by subsystem. As this is a cross-subsystem
>  series, the mm patches have to be merged before the sunrpc and net
>  users.
> 
> Changelog since v2
> o Prep new pages with IRQs enabled
> o Minor documentation update
> 
> Changelog since v1
> o Parenthesise binary and boolean comparisons
> o Add reviewed-bys
> o Rebase to 5.12-rc2
> 
> This series introduces a bulk order-0 page allocator with the
> intent that sunrpc and the network page pool become the first users.
> The implementation is not particularly efficient and the intention is
> to iron out what the semantics of the API should be for users. Despite
> that, this is a performance-related enhancement for users that require
> multiple pages for an operation without multiple round-trips to the page
> allocator. Quoting the last patch for the prototype high-speed networking
> use-case.
> 
>    For an XDP-redirect workload with the 100G mlx5 driver (which uses
>    page_pool), redirecting xdp_frame packets into a veth that does
>    XDP_PASS to create an SKB from the xdp_frame, the page cannot be
>    returned to the page_pool. In this case, we saw[1] an improvement
>    of 18.8% from using the alloc_pages_bulk API (3,677,958 pps ->
>    4,368,926 pps).
> 
> Both potential users in this series are corner cases (NFS and high-speed
> networks) so it is unlikely that most users will see any benefit in the
> short term. Other potential users are batch allocations for page
> cache readahead, fault around and SLUB allocations when high-order pages
> are unavailable. It's unknown how much benefit would be seen by converting
> multiple page allocation calls to a single batch or what difference it may
> make to headline performance. It's a chicken and egg problem given that
> the potential benefit cannot be investigated without an implementation
> to test against.
> 
> Light testing passed, I'm relying on Chuck and Jesper to test their
> implementations, choose whether to use lists or arrays and document
> performance gains/losses in the changelogs.
> 
> Patch 1 renames a variable name that is particularly unpopular
> 
> Patch 2 adds a bulk page allocator
> 
> Patch 3 adds an array-based version of the bulk allocator
> 
> include/linux/gfp.h |  18 +++++
> mm/page_alloc.c     | 171 ++++++++++++++++++++++++++++++++++++++++++--
> 2 files changed, 185 insertions(+), 4 deletions(-)
> 
> -- 
> 2.26.2
> 

--
Chuck Lever





